Index

Autonomy Enterprise Speech Analytics
Understanding Speech
Approaches to Speech Analytics
Phonetic Searching
Word Spotting
Conceptual Understanding
Language Independent Voice Analysis
Advanced Analytics
Automatic Query Guidance
Hot and Breaking Topics
Clustering
Script Adherence
Trend Analysis
Sentiment Analysis
Multi-Channel Interaction Analysis
Understanding Speech
In order to search, analyze, and retrieve speech information within the business, analytics technology must first be able to recognize and understand spoken communications. Because a speaker's language, dialect, accent, or tone can affect the way words and phrases sound, legacy speech recognition technologies often misinterpret what is being said. Speech processing can be further complicated by external factors such as background noise, mode of communication, and the quality of the recording.

Autonomy's speech recognition engine accounts for this variability by using a combination of acoustic models, a language model, and a pronunciation dictionary to form a hypothesis of what is being said. The acoustic model allows the speech engine to estimate the probability of a specific sound translating to a word or part of a word. The language model builds upon this, enabling the system to determine the probability of one word following another and so produce an accurate hypothesis of the spoken words. For example, "the bog barked" sounds very similar to "the dog barked," but the probability of "barked" following "dog" is much greater than that of "barked" following "bog." The language model can be adjusted to support industry-specific words and phrases so that they are recognized as probable. As more interactions take place, the system trains itself to recognize frequently used words and phrases and becomes more accurate over time.
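The interplay between acoustically similar hypotheses and the language model can be pictured with a toy bigram scorer. This is a minimal sketch: the probabilities below are invented for illustration and bear no relation to Autonomy's actual models.

```python
# Toy bigram language model: scores competing transcription hypotheses.
# The probabilities below are illustrative, not real corpus statistics.
BIGRAM_PROB = {
    ("the", "dog"): 0.010,
    ("the", "bog"): 0.0001,
    ("dog", "barked"): 0.200,
    ("bog", "barked"): 0.0001,
}
DEFAULT_PROB = 1e-6  # back-off value for unseen word pairs

def sentence_score(words):
    """Multiply bigram probabilities across the word sequence."""
    score = 1.0
    for pair in zip(words, words[1:]):
        score *= BIGRAM_PROB.get(pair, DEFAULT_PROB)
    return score

# Two hypotheses that sound nearly identical to an acoustic model alone:
hypotheses = [["the", "dog", "barked"], ["the", "bog", "barked"]]
best = max(hypotheses, key=sentence_score)
print(best)  # the "bog" hypothesis loses on language-model probability
```

A real engine combines this score with the acoustic model's probabilities rather than using either alone; the sketch only shows why "dog barked" wins over "bog barked."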
This meaning-based approach enables the speech engine to form an understanding of spoken information based on the context of the interaction rather than relying on sound alone. By understanding the relationships that exist between words, Autonomy's technology can effectively distinguish between homophones, homonyms, and other linguistic complexities that often lead to false positives with legacy methods.
Phonetic Searching
Phonemes are the smallest discrete sound-parts of language and form the basic components of any word. Phonetic searching attempts to break down words into their constituent phonemes and then match searched terms to combinations of phonemes as they occur in the audio stream. While this approach does not necessarily require full dictionary coverage, since the user can suggest alternative pronunciations via different text compositions, it is limited by its accuracy and its inability to make conceptual matches. Phonetic searching is a commonly used approach to speech analytics because it emphasizes the way things sound rather than attempting a speech-to-text translation. However, because this method treats words solely as combinations of sound with no awareness of their context, it cannot differentiate between words and phrases that sound similar but have different conceptual meanings. As a result, it frequently returns high levels of false positives. For example, the sentence "The computer can recognize speech" contains nearly the same phoneme components as "The oil spill will wreck a nice beach," while the meaning is entirely different. A phonetic-based speech engine would not be able to tell the difference. In addition, phonetic searching often cannot recognize when a base phoneme sequence is actually part of a larger, more complex word, such as "cat" in "catastrophe" or "category." The methodology becomes extremely weak when the search involves very short words of only one or two syllables, due to the vast number of potential matches.
Most competing products use phonetics to process speech. This method looks for sounds irrespective of the words themselves and makes no attempt to determine their meaning.
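The false-positive behavior described above can be sketched with a toy phoneme matcher. The hand-built phoneme strings below are simplified ARPAbet-style approximations assigned by hand, not output from any real phonetic engine.

```python
# Toy phonetic search: words are reduced to phoneme sequences and a query
# matches wherever its sequence occurs -- with no awareness of meaning.
# Phoneme assignments below are hand-made, simplified approximations.
PHONEMES = {
    "cat":         ["K", "AE", "T"],
    "catastrophe": ["K", "AE", "T", "AE", "S", "T", "R", "AH", "F", "IY"],
    "category":    ["K", "AE", "T", "AH", "G", "AO", "R", "IY"],
    "dog":         ["D", "AO", "G"],
}

def phonetic_match(query, word):
    """True if the query's phoneme sequence occurs inside the word's."""
    q, w = PHONEMES[query], PHONEMES[word]
    return any(w[i:i + len(q)] == q for i in range(len(w) - len(q) + 1))

# "cat" falsely matches inside longer words that merely begin with the
# same sounds -- the classic phonetic-search false positive.
hits = [w for w in PHONEMES if phonetic_match("cat", w)]
print(hits)  # ['cat', 'catastrophe', 'category']
```

The short three-phoneme query matches two unrelated longer words, which is exactly why short search terms degrade phonetic accuracy so badly.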
Word Spotting
Word spotting is the process of recognizing isolated words by matching them to the sounds that are produced. As with phoneme matching, word spotting techniques search for words out of context, so they are unable to differentiate between words that sound alike but have completely different meanings. Because the system relies on exact sound matches, it is also unable to account for changes in pronunciation that affect sound, such as accents or plurals. Traditional approaches like phoneme processing and word spotting cannot account for multiple expressions of the same concept, such as the words "supervisor" and "manager" having the same conceptual meaning in a given context. In this case, any information that is related to the search term but does not contain the same phonemes will not be retrieved, limiting the user to only a fraction of the relevant information. Because these methods cannot make conceptual associations, they often miss related information that is not included in the search terms.
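The limitations above can be reduced to a few lines: exact-match spotting, here approximated with literal word matching on a made-up transcript, is blind to both synonyms and inflected forms.

```python
# Word spotting as exact matching: a query term is found only where its
# exact form occurs. The transcript is a fabricated example.
transcript = "please let me speak to your manager about my account"

def word_spot(term, text):
    """True only if the exact term appears in the transcript."""
    return term in text.split()

print(word_spot("supervisor", transcript))  # False -- synonym "manager" is invisible
print(word_spot("account", transcript))     # True
print(word_spot("accounts", transcript))    # False -- the plural changes the sound
```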
Conceptual Understanding
Due to the variables in speech and language, legacy approaches like phonetic searching and word spotting alone are not enough to determine what is truly being said. While Autonomy supports phonetic and word-spotting methods for search and retrieval, it also delivers sophisticated audio recognition and analysis technology that allows end users to search audio data from a number of sources and further narrow results by topic, speaker, and the level of emotion present in the recording or interaction. This solution supports both keyword searches and natural language queries to retrieve audio content within the enterprise. Because Autonomy's technology understands the meaning of information, it can search the content of audio and video assets directly and does not rely on tagging or metadata to return accurate results. By automatically forming a conceptual understanding of speech information, Autonomy speech analytics delivers automatic and accurate retrieval of files containing audio without human intervention or manual definition of search terms, making it the market's most advanced form of speech analytics. Conceptual understanding further enables Autonomy's Intelligent Data Operating Layer (IDOL) to automatically categorize and analyze audio information based on its meaning, delivering advanced functionality such as clustering, trend analysis, and emotion detection.
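One way to picture concept-level matching is the deliberately simplified sketch below. The hand-built concept lexicon stands in for the statistical understanding IDOL derives automatically; nothing here reflects Autonomy's actual implementation.

```python
# Minimal sketch of concept-based retrieval: instead of matching sounds or
# literal words, each term maps to a concept identifier. The lexicon below
# is hand-built for illustration only.
CONCEPTS = {
    "supervisor":    "PERSON_IN_CHARGE",
    "manager":       "PERSON_IN_CHARGE",
    "boss":          "PERSON_IN_CHARGE",
    "refund":        "MONEY_BACK",
    "reimbursement": "MONEY_BACK",
}

def conceptual_match(query, transcript):
    """True if any transcript word shares a concept with the query term."""
    target = CONCEPTS.get(query)
    if target is None:
        return False
    return any(CONCEPTS.get(w) == target for w in transcript.split())

# A query for "supervisor" retrieves an interaction that only says "manager":
print(conceptual_match("supervisor", "i want to talk to your manager"))   # True
print(conceptual_match("refund", "expecting a reimbursement this week"))  # True
```

Where word spotting would return nothing for "supervisor" here, concept matching retrieves the related interaction, which is the behavior the section describes.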
[Figure: conceptual clustering of result documents for the query "Madonna"]
Advanced Analytics
Autonomy delivers advanced analytic capabilities that extend far beyond keyword search to uncover actionable information embedded in enterprise speech and audio assets, such as contact center interactions. Autonomy's core technology, the Intelligent Data Operating Layer (IDOL), automatically processes audio and video data and exposes this intelligence to the entire enterprise through keyword and natural language search, trend identification, cluster mapping, and other forms of advanced analysis. Using IDOL as the foundation for enterprise speech analytics, users can find matches to typed and spoken queries based on the main concepts and ideas present in data types with embedded audio information, even if different words and phrases are used to describe the same concepts. IDOL's conceptual search functionality additionally groups data with related meanings, automating many complex enterprise processes and simplifying information management.
[Figure: cluster mapping and trend analysis views]
Clustering
Clustering is a unique feature that partitions information so that data with similar topics or concepts automatically clusters together without manual definition. This information is displayed in a two-dimensional map, which allows the user to visualize the common themes that exist between interactions. Results are ranked by their conceptual similarity, which is essential to retrieving the interactions most relevant to a query, even if they contain different keywords.
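The mechanics of similarity-based grouping can be sketched roughly as below. Real conceptual clustering works on meaning rather than raw terms; this toy version, with fabricated example documents, only shows how similar items gravitate into the same group.

```python
import math
from collections import Counter

# Rough sketch of clustering: documents become term-frequency vectors and
# are grouped greedily by cosine similarity to each cluster's first member.
# Documents and the 0.25 threshold are illustrative assumptions.
docs = [
    "billing error on my latest invoice",
    "invoice shows a billing mistake",
    "phone screen cracked after one day",
    "cracked screen on a brand new phone",
]

def cosine(a, b):
    """Cosine similarity between two documents' term-frequency vectors."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def cluster(docs, threshold=0.25):
    clusters = []
    for d in docs:
        for c in clusters:
            if cosine(d, c[0]) >= threshold:  # compare to the cluster seed
                c.append(d)
                break
        else:
            clusters.append([d])
    return clusters

for c in cluster(docs):
    print(c)  # billing documents group together; phone documents group together
```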
Script Adherence
Script adherence functionality enables contact center, business, and compliance managers to automatically monitor voice interactions for a number of purposes. The application compares any interaction, whether it is conducted through voice, email, or chat, to a model script and immediately alerts managers to any significant deviation, enabling the immediate resolution of issues related to legal compliance, risk, fraud, or performance.
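A simplified sketch of the compare-and-alert flow: a transcript is scored against the model script and flagged when it deviates beyond a threshold. The script, transcripts, and 0.6 cutoff are all invented for illustration.

```python
import difflib

# Script adherence sketch: similarity of an agent transcript to a model
# script, with an alert below a threshold. All values are illustrative.
MODEL_SCRIPT = ("thank you for calling this call may be recorded "
                "how can i help you today")

def adherence(transcript, script=MODEL_SCRIPT):
    """Word-level similarity ratio between transcript and script (0.0-1.0)."""
    return difflib.SequenceMatcher(None, transcript.split(), script.split()).ratio()

def check(transcript, threshold=0.6):
    score = adherence(transcript)
    if score < threshold:
        return f"ALERT: deviation from script (adherence {score:.2f})"
    return f"OK (adherence {score:.2f})"

print(check("thank you for calling this call may be recorded how can i help you"))
print(check("yeah what do you want"))  # flagged as a significant deviation
```

A production system would align script sections and tolerate paraphrase; the similarity-ratio approach here is only the simplest stand-in for "significant deviation."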
Trend Analysis
Trend analysis is crucial to identifying and responding to client, product, or operational issues as they are discussed. By automatically grouping interactions with similar concepts, speech analytics can uncover emerging issues and automatically alert the business. This feature also identifies customer, market, and competitive trends over a specified period of time, delivering timely information to departments such as sales, marketing, development, and customer service.
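Emerging-issue detection can be reduced to counting topic volume per period and flagging spikes. The interactions and the 2x spike rule below are illustrative assumptions, not the product's actual logic.

```python
from collections import Counter

# Trend-detection sketch: count topic mentions per period and flag topics
# whose volume jumps between periods. Data and the 2x rule are made up.
interactions = [
    ("week1", "billing"), ("week1", "shipping"),
    ("week2", "billing"), ("week2", "billing"), ("week2", "billing"),
    ("week2", "shipping"),
]

def emerging_topics(interactions, prev, curr, factor=2.0):
    """Topics whose mention count at least doubled from prev to curr."""
    prev_counts = Counter(t for w, t in interactions if w == prev)
    curr_counts = Counter(t for w, t in interactions if w == curr)
    # unseen previous topics get a small floor so any new topic can spike
    return [t for t, n in curr_counts.items()
            if n >= factor * prev_counts.get(t, 0.5)]

print(emerging_topics(interactions, "week1", "week2"))  # ['billing']
```

Billing mentions tripled week over week while shipping stayed flat, so only billing would trigger an alert.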
Sentiment Analysis
Sentiment analysis consists of speaker separation and the identification of heightened emotion and cross talk within an interaction, providing great detail to the business about the identity and emotional state of clients or customers. This feature works by displaying each speaker and areas of cross-talk in different colors in the media player when an interaction is played back. End-users can additionally search for interactions containing heightened emotion or filter a keyword search by whether they contain a certain degree of emotion.
Sentiment analysis is highly valuable to the business, as it aids in the understanding of customer attitudes, behaviors, expectations, and intentions. It also provides root cause analysis of interactions in which a customer was upset, angry, or confused, providing additional content for training and development.
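The emotion-filtered search described above can be pictured with the small sketch below. The emotion scores are assumed to come from upstream acoustic analysis, and the records are fabricated examples.

```python
# Emotion-filtered search sketch: each interaction carries an emotion
# score (assumed to be produced upstream) and keyword hits can be
# filtered by that score. Records below are fabricated examples.
interactions = [
    {"id": 1, "text": "my order never arrived", "emotion": 0.9},
    {"id": 2, "text": "checking on my order status", "emotion": 0.2},
    {"id": 3, "text": "order was wrong again", "emotion": 0.7},
]

def search(keyword, min_emotion=0.0):
    """IDs of interactions matching the keyword above an emotion floor."""
    return [i["id"] for i in interactions
            if keyword in i["text"] and i["emotion"] >= min_emotion]

print(search("order"))                   # [1, 2, 3]
print(search("order", min_emotion=0.5))  # [1, 3] -- heightened emotion only
```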
About Autonomy
Autonomy Corporation plc (LSE: AU. or AU.L), a global leader in infrastructure software for the enterprise, spearheads the Meaning Based Computing movement. It was recently ranked by IDC as the clear leader in enterprise search revenues, with market share nearly double that of its nearest competitor. Autonomy's technology allows computers to harness the full richness of human information, forming a conceptual and contextual understanding of any piece of electronic data, including unstructured information, such as text, email, web pages, voice, or video. Autonomy's software powers the full spectrum of mission-critical enterprise applications including pan-enterprise search, customer interaction solutions, information governance, end-to-end eDiscovery, records management, archiving, business process management, web content management, web optimization, rich media management and video and audio analysis. Autonomy's customer base is comprised of more than 20,000 global companies, law firms and federal agencies including: AOL, BAE Systems, BBC, Bloomberg, Boeing, Citigroup, Coca Cola, Daimler AG, Deutsche Bank, DLA Piper, Ericsson, FedEx, Ford, GlaxoSmithKline, Lloyds TSB, NASA, Nestlé, the New York Stock Exchange, Reuters, Shell, Tesco, T-Mobile, the U.S. Department of Energy, the U.S. Department of Homeland Security and the U.S. Securities and Exchange Commission. More than 400 companies OEM Autonomy technology, including Symantec, Citrix, HP, Novell, Oracle, Sybase and TIBCO.
The information contained in this document represents the current opinion as of the date of publication of Autonomy Systems Ltd. regarding the issues discussed. Autonomy's opinion is based upon our review of competitor product information publicly available as of the date of this document. Because Autonomy must respond to changing market conditions, this document should not be interpreted as a commitment on the part of Autonomy, and Autonomy cannot attest to the accuracy of any information presented after the date of publication. This document is for informational purposes only; Autonomy makes no warranties, express or implied, in this document.
(Autonomy Inc. and Autonomy Systems Limited are both subsidiaries of Autonomy Corporation plc.)