Curo® Speech and Language Platform

The new wave of speech, language and multimodal interface solutions

Available today: the technology
for tomorrow's solutions

The devices, applications and services of tomorrow need a foundation built on the latest technology. That's why Interactions offers speech, language and multimodal interface solutions that deliver unprecedented accuracy and performance.

With our Curo platform, you can take full advantage of automatic speech recognition, text-to-speech, voice biometrics and natural language processing. Years of applied research in machine learning and deep neural networks position our technology at the forefront of the industry. Our service-oriented architecture provides a broad set of speech and language platform APIs for partner applications and third-party developers. All available via the cloud or on-premises.
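As a rough illustration of what consuming a cloud speech API of this kind can look like, here is a minimal Python sketch. The endpoint URL, credential, headers and response fields are assumptions made for the example only and are not the actual Curo API.

    # Illustrative only: the endpoint, credential and response fields below are
    # hypothetical placeholders, not the actual Curo platform API.
    import requests

    API_URL = "https://speech.example.com/v1/recognize"  # hypothetical ASR endpoint
    API_KEY = "YOUR_API_KEY"                              # hypothetical credential

    def transcribe(audio_path: str) -> str:
        """Send an audio file to a cloud ASR service and return the transcript."""
        with open(audio_path, "rb") as audio:
            response = requests.post(
                API_URL,
                headers={
                    "Authorization": f"Bearer {API_KEY}",
                    "Content-Type": "audio/wav",
                },
                data=audio,
            )
        response.raise_for_status()
        # Assume the service responds with JSON such as {"transcript": "..."}.
        return response.json()["transcript"]

    print(transcribe("greeting.wav"))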

ASR

Automatic Speech Recognition

Deliver natural and successful communication experiences for your customers.

Learn More

TTS

Text to Speech

Give voice to your content.

Learn More

VB

Voice Biometrics

Provide your password-weary customers with a fast and simple way to verify their identity.

Learn More

NLP

Natural Language Processing

Transform your unstructured speech and text into useful information.

Learn More

One powerful platform supports
multimodal technologies

Speech
Text
Gesture

The future: Decades in the making

In 2014, Interactions acquired the AT&T Watson℠ technology platform that's shaped the industry for decades. Now that we've integrated it into our own patented technology, Interactions can deliver even more accurate and conversational speech and multimodal solutions.

Here are just a few highlights showing the Watson platform's history of innovation:

2015 and beyond

Intelligent speech, language and multimodal services make seamless communications, information access, personalized commerce and customer care possible. This includes technology that enables efficient communication for home automation, connected cars, entertainment services and consumer interactions for sales, service and support.

2000s

  • 2014

    Interactions acquires the AT&T Watson speech and natural language platform to enable the next generation of efficient, multimodal applications, devices and services.

  • 2012

    Interactions' licensing partnership with AT&T integrates Watson speech technology into the company's award-winning Adaptive Understanding™ platform.

  • 2004

    Interactions enters the market with the first fully conversational Virtual Assistant solutions that allow consumers to speak naturally.

  • 2002

    The more sophisticated "How May I Help You?" service goes live, handling two million calls per month.

  • Natural language offerings drive a paradigm shift in user experience and automation, beginning with the introduction of "How May I Help You?" conversational technology for automated customer care. In recent years, AT&T has increased the flexibility of speech recognizers through dynamic hierarchical statistical language modeling, a strategy that supports recognition of complex grammars. New algorithms based on deep neural networks have been added to model the sound of the human voice more accurately and reduce error rates. A wide range of software and process inventions has also been developed to increase speech recognition speed and precision.

1990s

  • Billions of dollars were saved when Large-Vocabulary Speech Services were introduced. The initial operator services system, Voice Recognition Call Processing (VRCP), was deployed in 1992 and saved hundreds of millions of dollars annually.

  • AT&T Labs pioneered machine learning and neural network technology through the 1980s and 1990s and, in 1996, deployed a check reader based on neural networks that processed 15% of all checks written in the United States. Also in the 1990s, other AT&T Labs innovations such as finite-state machine decoders and discriminative training provided performance gains, especially for large vocabularies. Today, we use these technologies to deliver significant accuracy improvements in real-time speech recognition.

1980s

AT&T pioneered the speech service industry by introducing the "Command & Control" approach. Simple commands like "Press or Say" dominated the customer service experience. In addition, barge-in capability was introduced, which allowed callers to interrupt the application and further expedite the customer service process.

In the beginning

Early speech recognition shows promise for human-machine communication. Simple phrases are recognized, and AT&T introduces multi-speaker capabilities that enable recognition of more complex interactions.

Ready for the next generation of communication?

Contact Us