
Autonomy Enterprise Speech Analytics

Index

Autonomy Enterprise Speech Analytics
Understanding Speech
Approaches to Speech Analytics
Phonetic Searching
Word Spotting
Conceptual Understanding
Language Independent Voice Analysis
Advanced Analytics
Automatic Query Guidance
Hot and Breaking Topics
Clustering
Script Adherence
Trend Analysis
Sentiment Analysis
Multi-Channel Interaction Analysis

Autonomy Enterprise Speech Analytics


Knowing the topics, sentiments, and concepts being discussed in your business is critical to understanding and responding to the factors that affect market presence and profitability. By analyzing voice information from routine customer interactions, voicemails, video, and other sources, speech analytics can have a profound impact on the way businesses manage customer service, sales and marketing, development, business strategy, risk, and liability. While voice recording and monitoring have become a mature market for many organizations, it is the ability to analyze and understand speech that enables businesses to reach a level of development and strategy that cannot be achieved through legacy speech technologies. Autonomy delivers meaning-based speech analytics to tap into enterprise audio information and extract relevant, actionable business intelligence. Speech analytics can be applied across a wide range of vertical markets for a variety of business purposes, including:

Customer Intelligence
Voice and Video Surveillance
Rich Media Management
Regulatory Compliance
Risk Analysis
eDiscovery and Litigation
Fraud Detection
Sales Verification
Dispute Resolution

Understanding Speech
In order to search, analyze, and retrieve speech information within the business, analytics technology must first be able to recognize and understand spoken communications. Because a speaker's language, dialect, accent, or tone can affect the way words and phrases sound, legacy speech recognition technologies often misinterpret what is being said. Speech processing can be further complicated by external factors such as background noise, mode of communication, and the quality of the recording. Autonomy's speech recognition engine accounts for the variability in speech by using a combination of acoustic models, a language model, and a pronunciation dictionary to form a hypothesis of what is being said. The acoustic model allows the speech engine to recognize the probability of a specific sound translating to a word or part of a word. The language model builds upon this to enable the system to determine the probability of one word following another, producing an accurate hypothesis of the spoken words. For example, "the bog barked" sounds very similar to "the dog barked," but the probability of "barked" following "dog" is much greater than that of "barked" following "bog." The language model can be adjusted to support industry-specific words and phrases so that they are recognized as probable. As more and more interactions take place, the system trains itself to recognize frequently used words and phrases and becomes more accurate over time.
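To make the dog/bog example concrete, the following minimal Python sketch scores both hypotheses with a toy bigram language model. The probabilities are invented purely for illustration; a production language model is estimated from large corpora and combined with acoustic scores.

# Toy bigram probabilities, invented for illustration only.
bigram_prob = {
    ("the", "dog"): 0.020,
    ("the", "bog"): 0.0001,
    ("dog", "barked"): 0.150,
    ("bog", "barked"): 0.0002,
}

def sentence_score(words, default=1e-6):
    """Multiply bigram probabilities across the word sequence."""
    score = 1.0
    for prev, curr in zip(words, words[1:]):
        score *= bigram_prob.get((prev, curr), default)
    return score

h1 = ["the", "dog", "barked"]
h2 = ["the", "bog", "barked"]
print(sentence_score(h1) > sentence_score(h2))  # True: "the dog barked" wins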

This meaning-based approach enables the speech engine to form an understanding of spoken information based on the context of the interaction rather than relying on sound alone. By understanding the relationships that exist between words, Autonomy's technology can effectively discern between homophones, homonyms, and other linguistic complexities that often lead to false positives with legacy methods.


Approaches to Speech Analytics


Speech technology has gone through several phases of innovation, each one addressing the limitations of previous methods. Early Interactive Voice Response (IVR) systems built into telephony platforms allowed callers to press or say a limited number of key words, such as "yes" and "no," that were already built into the system. Speech technology eventually became able to recognize more complex words and phrases, but had trouble segmenting words without distinct pauses in the speech. Several phases of speech recognition followed, including phonetic indexing and word-spotting methods that improved accuracy but often produced false positives and missed potentially relevant information. In response to the challenges presented by phoneme processing and word-spotting techniques, language models were developed to deliver a more accurate recognition rate for complex words and phrases by using a dictionary and a pre-defined language model. Self-learning language models were then introduced to automatically expand the system's vocabulary based on commonly used words. Today, a combination of language models, acoustic models, and advanced algorithms is used to understand the relationships that exist between words and form a conceptual understanding of their meaning. Autonomy supports all methods of speech processing, including phonetic searching, word spotting, Boolean and parametric methods, and conceptual understanding.

Phonetic Searching
Phonemes are the smallest discrete sound-parts of language and form the basic components of any word. Phonetic searching attempts to break down words into their constituent phonemes and then match searched terms to combinations of phonemes as they occur in the audio stream. While this approach does not necessarily require full dictionary coverage, since the user can suggest alternative pronunciations via different text compositions, it is limited in its accuracy and unable to make conceptual matches. Phonetic searching is a commonly used approach to speech analytics because it emphasizes the way things sound rather than attempting a speech-to-text translation. However, because this method treats words solely as combinations of sound with no awareness of their context, it cannot differentiate between words and phrases that sound similar but have different conceptual meanings. As a result, this method frequently returns high levels of false positives. For example, the sentence "The computer can recognize speech" contains the same basic phoneme components as "The oil spill will wreck a nice beach," while the meaning is entirely different. A phonetic-based speech engine would not be able to tell the difference. In addition, phonetic searching often cannot recognize when a base phoneme sequence is actually part of a larger, more complex word, such as "cat" in the word "catastrophe" or "category." Phonetic searching becomes extremely weak when the search involves very short words of only one or two syllables, due to the vast number of potential matches.
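The "cat" in "catastrophe" problem can be illustrated with a short Python sketch. The phoneme transcriptions below are simplified and invented for illustration; real phonetic engines work on richer acoustic representations, but the context-free matching failure is the same.

# Hypothetical, simplified phoneme transcriptions (ARPAbet-style).
PHONEMES = {
    "cat":         ["K", "AE", "T"],
    "catastrophe": ["K", "AE", "T", "AE", "S", "T", "R", "AH", "F", "IY"],
}

def phonetic_hit(query, audio_word):
    """True if the query's phoneme sequence occurs anywhere inside the
    audio word's phoneme sequence (matching sound only, not meaning)."""
    q, w = PHONEMES[query], PHONEMES[audio_word]
    return any(w[i:i + len(q)] == q for i in range(len(w) - len(q) + 1))

print(phonetic_hit("cat", "catastrophe"))  # True -- a false positive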

Most competing products use phonetics to process speech. This method looks for sounds irrespective of the words themselves and makes no attempt to determine their meaning.

Word Spotting
Word spotting is the process of recognizing isolated words by matching them to the sounds that are produced. As with phoneme matching, word spotting techniques search for words out of context, so they are unable to differentiate between words that sound alike but have completely different meanings. Because the system relies on exact sound matches, it is also unable to account for changes in pronunciation that affect sound, such as accents or plurals. Traditional approaches like phoneme processing and word spotting cannot account for multiple expressions of the same concept, such as the words "supervisor" and "manager" having the same conceptual meaning within a certain context. In this case, any information that is related to the search term but does not contain the same phonemes will not be retrieved, limiting the user to only a fraction of the relevant information. Because these methods cannot make conceptual associations, they often miss related information that is not included in the search terms.
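A minimal sketch of word spotting's core limitation, using invented transcripts: exact matching retrieves only the literal search term, so the conceptually equivalent "manager" interaction is never found.

transcripts = [
    "i asked to speak to a supervisor",
    "the manager resolved my complaint",
]

def word_spot(term, docs):
    """Return documents whose word list contains the exact term."""
    return [d for d in docs if term in d.split()]

print(word_spot("supervisor", transcripts))
# Only the first transcript is returned; the related "manager"
# interaction is missed entirely.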

Conceptual Understanding
Due to the variables in speech and language, legacy approaches like phonetic searching and word spotting alone are not enough to determine what is truly being said. While Autonomy supports phonetic and word-spotting methods for search and retrieval, it also delivers sophisticated audio recognition and analysis technology that allows end-users to search audio data from a number of sources and further narrow results by topic, speaker, and the level of emotion present in the recording or interaction. This solution supports both keyword searches and natural language queries to retrieve audio content within the enterprise. Because Autonomy's technology understands the meaning of information, it delivers the ability to search the content of audio and video assets without relying on tagging or metadata to return accurate results. By automatically forming a conceptual understanding of speech information, Autonomy speech analytics delivers automatic and accurate retrieval of files containing audio without human intervention or manual definition of search terms, making it the market's most advanced form of speech analytics. Conceptual understanding further enables Autonomy's Intelligent Data Operating Layer (IDOL) to automatically categorize and analyze audio information based on its meaning, delivering advanced functionality such as clustering, trend analysis, and emotion detection.
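IDOL's internals are proprietary, so the following Python sketch is a loose analogy only: it represents documents and a query as vectors in a shared concept space (all values invented) and ranks by cosine similarity, showing how a document can match a query even when they share no literal keywords.

import math

# Invented concept vectors for three hypothetical interactions.
concept_vectors = {
    "call about a billing error": [0.9, 0.1, 0.0],
    "invoice was charged twice":  [0.6, 0.3, 0.1],
    "shipping address update":    [0.0, 0.1, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query = [0.85, 0.15, 0.0]  # hypothetical vector for the query "payment dispute"
ranked = sorted(concept_vectors,
                key=lambda d: cosine(query, concept_vectors[d]),
                reverse=True)
print(ranked[0])  # "call about a billing error" ranks first despite sharing no words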

[Figure: Conceptual clustering of the query "Madonna": result documents are grouped by most likely meaning (1. Singer, 2. Italian Renaissance, 3. Religious Icon), and further related suggestions are offered alongside the results.]

Language Independent Voice Analysis


Autonomy's speech technology is language independent; it does not rely on vocabulary and grammatical rules, but derives understanding based purely on context. This allows the solution to develop a human-like understanding of the concepts spoken rather than connecting specific sounds to specific words or meanings. With this functionality, Autonomy's technology can determine meaning no matter what language is spoken, enabling both cross-lingual and multi-lingual analysis of audio information. In addition, Autonomy's speech analytics tool intelligently recognizes accents and languages and automatically shifts the language model to the appropriate language in real time. This is especially critical for companies that operate in global markets serving multiple languages and dialects. Because the language model is self-learning, it can automatically add new terminology in any language to its vocabulary based on the context of the words being spoken. Autonomy supports speech recognition and analysis in more than 20 languages, including English, Spanish, Danish, French, German, Hungarian, Italian, Polish, Portuguese, Romanian, Russian, and Simplified Chinese.
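As a crude, purely illustrative stand-in for the language-switching idea, the Python sketch below identifies a segment's language from stopword overlap and routes it to the matching model. All names and data are invented; a real system classifies the audio itself rather than a transcript.

# Invented stopword lists for three languages.
STOPWORDS = {
    "en": {"the", "and", "is"},
    "es": {"el", "los", "es"},
    "de": {"der", "und", "ist"},
}

def detect_language(segment):
    """Pick the language whose stopwords best match the segment."""
    words = set(segment.lower().split())
    return max(STOPWORDS, key=lambda lang: len(words & STOPWORDS[lang]))

def route(segment, models):
    """Hand the segment to the language model for the detected language."""
    return models[detect_language(segment)]

models = {"en": "english-model", "es": "spanish-model", "de": "german-model"}
print(route("el cliente es muy importante", models))  # spanish-model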

Advanced Analytics
Autonomy delivers advanced analytic capabilities that extend far beyond keyword search functionality to uncover actionable information embedded in enterprise speech and audio assets, such as contact center interactions. Autonomy's core technology, the Intelligent Data Operating Layer (IDOL), automatically processes audio and video data and exposes this intelligence to the entire enterprise through keyword and natural language search functionality, trend identification, cluster mapping, and other forms of advanced analysis. Using IDOL as the foundation for enterprise speech analytics, users can find matches to typed and spoken queries based on the main concepts and ideas present in data types with embedded audio information, even if different words and phrases are used to describe the same concepts. IDOL's conceptual search functionality additionally groups data with related meanings, automating many complex enterprise processes and simplifying information management.

[Figure: Cluster mapping and trend analysis visualizations]

Automatic Query Guidance


Automatic Query Guidance (AQG) dynamically clusters results into relevant groups when a search is performed, suggesting further topics or information related to the initial query. Suggestions are provided automatically and in real time to intelligently assist the end-user in navigating large amounts of data. Unlike other approaches, the Autonomy solution does not rely on intensive and subjective manual tagging in order to provide relevant information to the user.
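As a simple illustration of query guidance (not IDOL's actual algorithm), this Python sketch derives follow-up suggestions from the most frequent terms across a result set, excluding the query itself. Result texts are invented.

from collections import Counter

results = [
    "billing dispute over roaming charges",
    "roaming charges abroad billing",
    "billing cycle start date question",
]

def suggest(query, docs, n=3):
    """Offer the most common result-set terms as follow-up topics."""
    terms = Counter(w for d in docs for w in d.split() if w != query)
    return [t for t, _ in terms.most_common(n)]

print(suggest("billing", results))  # ['roaming', 'charges', 'dispute']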

Hot and Breaking Topics


One of the greatest challenges businesses face is the identification of emerging trends, whether in customer behavior, operational issues, or competitive information. IDOL's Hot and Breaking Topics feature automatically presents new and common topics as they are discussed, without the end-user having to perform a search. Hot results represent topics from interactions that are high in volume, while Breaking results are those IDOL identifies as new. This solution also enables the user to compare hot and breaking information to previously identified trends.
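The hot/breaking distinction described above can be sketched in a few lines of Python: "hot" topics exceed a volume threshold in the current window, while "breaking" topics appear now but were absent from the historical window. Topic labels and counts are invented.

from collections import Counter

previous = Counter({"billing": 40, "delivery": 25})
current  = Counter({"billing": 90, "delivery": 20, "outage": 15})

VOLUME_THRESHOLD = 50  # arbitrary cutoff for this illustration
hot      = [t for t, n in current.items() if n >= VOLUME_THRESHOLD]
breaking = [t for t in current if t not in previous]

print(hot)       # ['billing']  -- high volume now
print(breaking)  # ['outage']   -- new topic, not previously seen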

Clustering
Clustering is a unique feature that partitions information so that data with similar topics or concepts automatically clusters together without manual definition. This information is displayed in a two-dimensional map, which allows the user to visualize the common themes that exist between interactions. Results are ranked by their conceptual similarity, which is essential to retrieving the interactions most relevant to a query, even if they contain different keywords.
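A minimal Python sketch of undefined-category clustering, using word-overlap (Jaccard) similarity on invented transcripts. A real system clusters on meaning rather than surface words, but the principle of grouping without predefined categories is the same.

def jaccard(a, b):
    """Word-overlap similarity between two texts."""
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

def cluster(docs, threshold=0.25):
    """Greedily group documents with the first cluster seed they resemble."""
    clusters = []
    for doc in docs:
        for c in clusters:
            if jaccard(doc, c[0]) >= threshold:
                c.append(doc)
                break
        else:
            clusters.append([doc])
    return clusters

docs = [
    "late delivery of my order",
    "order delivery was late again",
    "password reset not working",
]
for group in cluster(docs):
    print(group)  # the two delivery complaints group together automatically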

Script Adherence
Script adherence functionality enables contact center, business, and compliance managers to automatically monitor voice interactions for a number of purposes. The application compares any interaction, whether it is conducted through voice, email, or chat, to a model script and immediately alerts managers to any significant deviation, enabling the immediate resolution of issues related to legal compliance, risk, fraud, or performance.
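As a rough sketch of the compare-and-alert idea (not Autonomy's actual scoring method), the following Python example measures how much of a model script's required vocabulary appears in an interaction and raises an alert below a threshold. Script text and threshold are invented.

MODEL_SCRIPT = "thank you for calling please verify your account number"

def adherence(interaction, script=MODEL_SCRIPT):
    """Fraction of the script's required words present in the interaction."""
    required = set(script.split())
    spoken = set(interaction.lower().split())
    return len(required & spoken) / len(required)

call = "thanks for calling how can i help"
score = adherence(call)
if score < 0.7:  # deviation threshold, chosen arbitrarily here
    print(f"ALERT: adherence {score:.0%} below threshold")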

Trend Analysis
Trend analysis is crucial to identifying and responding to client, product, or operational issues as they are discussed. By automatically grouping interactions with similar concepts, speech analytics can uncover emerging issues and automatically alert the business. This feature also identifies customer, market, and competitive trends over a specified period of time, delivering timely information to departments such as sales, marketing, development, and customer service.
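To illustrate trend detection over time, this Python sketch counts interactions per topic in weekly buckets (all figures invented) and flags topics whose volume is steadily rising.

weekly_counts = {
    "refund": [5, 9, 14, 22],    # steadily rising -> emerging issue
    "login":  [12, 11, 12, 10],  # flat
}

# A topic trends upward if every week is at least as large as the last
# and the final week exceeds the first.
rising = [t for t, series in weekly_counts.items()
          if all(b >= a for a, b in zip(series, series[1:]))
          and series[-1] > series[0]]
print(rising)  # ['refund']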

Sentiment Analysis
Sentiment analysis consists of speaker separation and the identification of heightened emotion and cross-talk within an interaction, providing the business with detailed insight into the identity and emotional state of clients or customers. This feature works by displaying each speaker and areas of cross-talk in different colors in the media player when an interaction is played back. End-users can additionally search for interactions containing heightened emotion, or filter a keyword search by whether results contain a certain degree of emotion.

Sentiment analysis is highly valuable to the business, as it aids in the understanding of customer attitudes, behaviors, expectations, and intentions. It also supports root cause analysis of interactions in which a customer was upset, angry, or confused, offering additional content for training and development.
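Filtering a keyword search by emotion, as described above, might look like the following Python sketch. The emotion scores and field names are invented; in practice such scores would come from acoustic analysis of the recording.

interactions = [
    {"text": "my bill is wrong again", "emotion": 0.9},
    {"text": "question about my bill", "emotion": 0.2},
]

def search(keyword, min_emotion=0.5):
    """Return matching interactions above a chosen emotion intensity."""
    return [i["text"] for i in interactions
            if keyword in i["text"] and i["emotion"] >= min_emotion]

print(search("bill"))  # only the heightened-emotion interaction is returned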

Multi-Channel Interaction Analysis


In addition to speech information, IDOL technology can be applied to other electronic forms of communication such as chat and email. Chat and email interactions are ingested into IDOL and are analyzed and searched in the same manner as voice interactions. Because IDOL is an infrastructure platform, voice, email, and chat are processed in a single solution, enabling the business to obtain relevant intelligence from all forms of interactions.
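As a simple illustration of the single-platform idea (field names invented), this Python sketch normalizes voice, email, and chat content into one document shape and runs a single query across all channels.

def normalize(channel, content):
    """Reduce any channel's content to one common document shape."""
    return {"channel": channel, "text": content.lower()}

index = [
    normalize("voice", "I want to cancel my subscription"),
    normalize("email", "Please cancel my subscription today"),
    normalize("chat",  "how do i cancel"),
]

hits = [d for d in index if "cancel" in d["text"]]
print([d["channel"] for d in hits])  # all three channels match one query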

About Autonomy
Autonomy Corporation plc (LSE: AU. or AU.L), a global leader in infrastructure software for the enterprise, spearheads the Meaning Based Computing movement. It was recently ranked by IDC as the clear leader in enterprise search revenues, with market share nearly double that of its nearest competitor. Autonomy's technology allows computers to harness the full richness of human information, forming a conceptual and contextual understanding of any piece of electronic data, including unstructured information, such as text, email, web pages, voice, or video. Autonomy's software powers the full spectrum of mission-critical enterprise applications including pan-enterprise search, customer interaction solutions, information governance, end-to-end eDiscovery, records management, archiving, business process management, web content management, web optimization, rich media management and video and audio analysis. Autonomy's customer base is comprised of more than 20,000 global companies, law firms and federal agencies including: AOL, BAE Systems, BBC, Bloomberg, Boeing, Citigroup, Coca-Cola, Daimler AG, Deutsche Bank, DLA Piper, Ericsson, FedEx, Ford, GlaxoSmithKline, Lloyds TSB, NASA, Nestlé, the New York Stock Exchange, Reuters, Shell, Tesco, T-Mobile, the U.S. Department of Energy, the U.S. Department of Homeland Security and the U.S. Securities and Exchange Commission. More than 400 companies OEM Autonomy technology, including Symantec, Citrix, HP, Novell, Oracle, Sybase and TIBCO.

For more information, please contact 1-800-835-6357

The information contained in this document represents the current opinion of Autonomy Systems Ltd. regarding the issues discussed, as of the date of publication. Autonomy's opinion is based upon our review of competitor product information publicly available as of the date of this document. Because Autonomy must respond to changing market conditions, this document should not be interpreted as a commitment on the part of Autonomy, and Autonomy cannot attest to the accuracy of any information presented after the date of publication. This document is for informational purposes only; Autonomy makes no warranties, express or implied, in this document.

(Autonomy Inc. and Autonomy Systems Limited are both subsidiaries of Autonomy Corporation plc.)
