apiai
Base URI
Required unless a sound file is provided. The natural language text to be processed. The request can have multiple query parameters. See Note above.
Required when multiple query parameters are used. The confidence of the corresponding query parameter having been correctly recognized by a speech recognition system. 0 represents no confidence and 1 represents the highest confidence. See Note above
Language tag from [HTTP/1.1 specification section 3.10](http://tools.ietf.org/html/rfc2616#section-3.10).
A Legal Name. List of contexts for the query. See Contexts.
Time zone from IANA Time Zone Database.
Optional. Typically not used, since the agent is specified by the access token. The ID of the agent to use
Request was successful.
A resource is deprecated and will be removed in the future.
Some required parameter is missing or has the wrong value. Details will be in the errorDetails field.
URI is not valid or the resource ID does not correspond to an existing resource.
HTTP method not allowed, such as attempting to use a POST request with an endpoint that only accepts GET requests, or vice-versa.
The request could not be completed due to a conflict with the current state of the resource. This code is only returned in situations where it is expected that the user might be able to resolve the conflict and resubmit the request. For example, deleting an entity that is used in an intent will return this error.
###Authentication
Each API request requires authentication that identifies the agent that is responsible for making the request. Authentication is provided through an access token and a subscription key.
There are two access tokens for each agent. The developer access token is used for managing entities and intents, and the client access token is used for making queries. The client access token may not be as secure because it may be stored as part of the app, and it may potentially be discovered. There is a way to regenerate the client key if it is compromised.
The subscription key is used for the Azure API management proxy. Unlike the access tokens, there is one subscription key per user that applies to all of the user’s agents.
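As a rough sketch of how the two credentials might travel with a query request, the snippet below builds the HTTP headers for one request. The header names used here (an Authorization bearer token and ocp-apim-subscription-key) are assumptions for illustration, not values confirmed by this document; consult the API reference for the authoritative names.

```java
import java.util.LinkedHashMap;
import java.util.Map;

class AuthHeaders {
    // Builds the headers a query request might carry.
    // The header names below are assumptions, not confirmed by this document.
    static Map<String, String> build(String clientAccessToken, String subscriptionKey) {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("Authorization", "Bearer " + clientAccessToken);
        headers.put("ocp-apim-subscription-key", subscriptionKey);
        return headers;
    }
}
```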
###Obtaining the access tokens and subscription key
To obtain the access tokens:
- From the Agents dropdown, click Create new agent. The agent name also acts as a namespace when referring to entities and intents.
API keys, which are used as access tokens, are unique per agent. By using an access token, you are specifying which agent’s entities and intents to use.
See Quick Start: User Interface on how to create a new agent with the user interface.
###Entities
Entities represent concepts that are often specific to a domain as a way of mapping natural language phrases to canonical phrases that capture their meaning. For example, for a music app, you might want an entity for music genres, and for a weather app, you might want an entity for popular cities.
In natural language, you can often have many ways to say the same thing. For this reason, each entity has a list of entries, where each entry contains a mapping between a group of synonyms and a reference value. For example, a music genre entity could have an entry with a reference value of “Rock” with synonyms of “Rock” and “Rock and Roll”.
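An entry is just a mapping from each synonym to the entry’s reference value, so a minimal model of one entity entry looks like this (class and method names are illustrative, not part of the API):

```java
import java.util.HashMap;
import java.util.Map;

class GenreEntity {
    // One entity entry: every synonym maps to the same reference value.
    static final Map<String, String> ENTRIES = new HashMap<>();
    static {
        ENTRIES.put("Rock", "Rock");
        ENTRIES.put("Rock and Roll", "Rock");
    }

    // Returns the reference value for a synonym, or null if unknown.
    static String resolve(String phrase) {
        return ENTRIES.get(phrase);
    }
}
```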
There are three types of entities available:
- System entities, which cover common concepts, such as numbers and dates.
- Developer-defined entities, which allow you to create your own entities for your domain.
- In-line entities, which are a quick way to use synonyms without much overhead.
When referring to an entity, prefix it with an @, such as @entity.
Note: A future feature will be the list entity, which defines a list of domain concepts, such as a list of artists for a music app.
See the Entity Overview for more information on entity types, syntax, and a list of what system entities are available. See the entities reference page for more information on the entity requests.
Note: The API.AI user interface allows you to create entities through web forms, but also allows you to upload them in JSON, XML, and CSV formats.
###Intents
An intent represents a mapping between what a user says and what action should be taken by your software.
Intents have four main elements:
- Templates. One or more user expressions, which could be patterns or examples of user requests. Templates can contain entities with aliases.
- Action. Each intent contains an action that consists of an action name and multiple parameters. Each parameter has a name and value, where the value is determined by the user expression.
- Input contexts. A set of contexts that must be set as a pre-requisite for the intent to be executed.
- Output contexts. A set of contexts that are set once the intent is executed. The contexts are optional.
For example, for a weather app, you might want to have an intent where the user asks what is the weather in a particular city. You would already have set up an entity called @city with a list of common cities. The intent could have two templates that represent two phrases for asking for the weather in a city:
- What is the weather in @city:location
- Forecast for @city:location
Note that the alias for the city is “location”.
Next, the intent would need an action, which would have a name such as “weatherForecast”. The parameter would be the city where we want the weather forecast, so there could be a parameter of name “cityLocation” with a value of $location, which is the alias that was defined in the template.
Now the agent can process user expressions such as:
- “What is the weather in New York”
- “Forecast for Washington”
The response that would be returned for the query “What is the weather in New York” is something like:
```json
{
  "id": "efc48c...",
  "result": {
    "resolvedQuery": "What is the weather in New York",
    "speech": "",
    "action": "weatherForecast",
    "parameters": {
      "cityLocation": "New York, NY"
    }
  }
}
```
Using this information, your software can return a weather forecast for a well-defined location such as “New York, NY”.
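On the client side, consuming the parsed result typically comes down to switching on the action name and reading the parameters. The dispatcher below is a hypothetical sketch of that step, not part of the API:

```java
import java.util.Map;

class ActionDispatcher {
    // Routes a parsed API.AI result (action + parameters) to app logic.
    // The returned string stands in for a real forecast lookup.
    static String handle(String action, Map<String, String> parameters) {
        if ("weatherForecast".equals(action)) {
            return "Forecast requested for " + parameters.get("cityLocation");
        }
        return "Unhandled action: " + action;
    }
}
```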
If the user followed the phrase “What is the weather in New York” with “What about Washington”, you want your software to process this as a request for more weather information. To accomplish this, you would:
- Add an output context of “weather” to your intent.
- Create a new intent with a template of “What about @city:location” and an input context of “weather”. This second intent would only be matched if the context was previously set to “weather”.
See Entity Overview for more information on the relationship between entities and intents, and see Contexts for more information on contexts.
Note: The API.AI user interface allows you to create intents through web forms, but also allows you to upload them in JSON, XML, and CSV formats.
###Contexts
Contexts are strings that represent the current context of the user expression. This is useful for differentiating phrases, which might be vague and have different meanings depending on what was spoken previously. For example, you might have an app for turning appliances on and off remotely. A user could say, “Turn on the front door light”, followed by “Turn it off”. In another situation, the user could say, “Turn on the coffee machine” followed by “Turn it off”, and the phrase “Turn it off” should result in a different action because the context is different.
If a user expression is matched to an intent, then the intent can set an output context so that future expressions are expected to share the same context. In our appliance app example, saying “Turn on the front door light” should set an output context to “front door light”. Then, there should be several intents with templates “Turn it off”, each with a different input context. If the context was set to “front door light”, then the intent with the “Turn it off” that has an input context of “front door light” will be executed, but the others will not.
Note: Contexts expire after 5 requests or after 5 minutes from the time they were set. Intents that renew the context will reset the context clock and counter to give an additional 5 requests and 5 minutes.
See Quick Start: Contexts for an example of how to set up and use contexts.
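The expiry rule above can be modeled as a small state machine: a context carries a countdown of 5 requests and a 5-minute deadline, and hitting either limit clears it. This is an illustrative client-side model, not the actual server implementation:

```java
class ContextStore {
    private String context;
    private int remainingRequests;
    private long expiresAtMillis;

    // A context lives for 5 requests or 5 minutes, whichever comes first.
    void set(String name, long nowMillis) {
        context = name;
        remainingRequests = 5;
        expiresAtMillis = nowMillis + 5 * 60 * 1000;
    }

    // Returns the active context for this request, or null if it has expired.
    String onRequest(long nowMillis) {
        if (context == null) {
            return null;
        }
        if (nowMillis >= expiresAtMillis || remainingRequests == 0) {
            context = null;
            return null;
        }
        remainingRequests--;
        return context;
    }
}
```

An intent that renews the context would simply call set again, restoring the full 5 requests and 5 minutes.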
###Aliases
When an entity is used in an expression, it will also have an alias, which acts like a variable name so that it can be referenced later. When defined, the entity and alias take this form:
@entity:alias
When it is referenced later, the alias takes this form:
$alias
Aliases are required because you might have an expression that uses an entity more than once. For example, you might have an intent that represents the time to travel from one city to another. The intent could contain templates such as “how long does it take to go from @city:fromCity to @city:toCity”, so that the first city can be referred to by $fromCity and the second city can be referred to by $toCity.
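The @entity:alias pairs in a template can be pulled out with a simple regular expression; this illustrative parser (not the actual API.AI implementation) extracts the aliases in order:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class AliasParser {
    // Extracts the alias from each @entity:alias pair in a template string.
    static List<String> aliases(String template) {
        List<String> out = new ArrayList<>();
        Matcher m = Pattern.compile("@(\\w+):(\\w+)").matcher(template);
        while (m.find()) {
            out.add(m.group(2)); // group 1 is the entity, group 2 the alias
        }
        return out;
    }
}
```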
###Legal Names
In some cases, names of data objects have certain restrictions on them. This data type is referred to as a “Legal Name”. Legal Names are strings that consist of these characters only:
- Uppercase Latin characters (A-Z)
- Lowercase Latin characters (a-z)
- Digits (0-9)
- Special characters: _ and -
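A Legal Name check therefore reduces to a single character-class regular expression. A minimal sketch (treating the empty string as illegal is an assumption, not stated above):

```java
class LegalName {
    // A Legal Name contains only A-Z, a-z, 0-9, '_' and '-'.
    static boolean isLegal(String name) {
        return !name.isEmpty() && name.matches("[A-Za-z0-9_-]+");
    }
}
```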
##User Interface Quick Start
The API.AI user interface is a web portal that lets you create agents, entities, and intents, as well as try out queries. (See Key Concepts for information on what these are). In this Quick Start, we will lead you through a simple example of processing natural language using the API.AI user interface.
In this example, we want to be able to take natural language text and return weather forecast data for a city. Of course, it will be up to the app to get the actual forecast data, but API.AI can take a variety of different phrases and translate them into structured data where there is a clear action (forecast) with clear parameters (the city).
###Step 1. Create an agent.
The agent corresponds to an application, and it contains the entities and intents.
From the API.AI homepage, click on Create Agent.
Click on NewAgent and type in “QuickStartAgent” for the agent name. Click Save & close.
###Step 2. Create an entity for cities.
Next, you are going to create an entity for the concept of cities. Because cities can be referred to by different names, we need to create a list of synonyms for each city.
First make sure that your current agent is QuickStartAgent. If not, choose it from the dropdown menu.
From the top menu, choose Create Entity.
Click on New_Entity and type “city”. Then click on Enter reference value… and type “New York, NY”. Click on Enter synonym… and type “New York”. Add two more synonyms: “NY” and “Big Apple”. Similarly, add another row with “Washington, DC” as the reference value and “Washington”, “DC”, and “Washington, DC” as the synonyms. It should look like this:
Click Save & close at the bottom.
###Step 3. Create an intent for weather forecast.
Now you will create an intent so that when the user expression asks for weather in a city, it will be mapped to an action for weather forecasting in that city.
Click on Intents at the top of the page.
Click Create an intent.
Click on New Intent and type “What is the weather in @city”. Under User says, click on Add user expression… and type “What is the weather in @city:location”. Click + Add and add another user expression called “Forecast for @city:location”.
Under Action, click Enter action name… and type “weatherForecast”. Under PARAMETER NAME, click Enter name… and type “cityLocation”. Click on Enter value… and type “$location”. It should look like this:
Note that we are using the entity named “city” by referring to it as @city. The :location that follows it is the alias so that we can refer to its value in the parameters as $location.
This intent will look for phrases that start with “What is the weather in…” and “Forecast for…” and that end with any of the synonyms for New York or Washington, DC. If the phrase matches, then it will return an action of name “weatherForecast” with a parameter of name “cityLocation” with the reference value of the city (“New York, NY” or “Washington, DC”).
Click Save & close at the bottom.
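Putting the pieces of this step together, the matching just described can be sketched as a prefix check plus a synonym lookup. This is a deliberately simplified model of the behavior; the real matching is more flexible:

```java
import java.util.Map;

class WeatherIntent {
    // Synonym -> reference value, mirroring the "city" entity from Step 2.
    static final Map<String, String> CITY = Map.of(
            "New York", "New York, NY",
            "NY", "New York, NY",
            "Big Apple", "New York, NY",
            "Washington", "Washington, DC",
            "DC", "Washington, DC",
            "Washington, DC", "Washington, DC");

    // Returns the cityLocation parameter value, or null when nothing matches.
    static String match(String phrase) {
        for (String prefix : new String[] { "What is the weather in ", "Forecast for " }) {
            if (phrase.startsWith(prefix)) {
                String city = CITY.get(phrase.substring(prefix.length()));
                if (city != null) {
                    return city;
                }
            }
        }
        return null;
    }
}
```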
###Step 4. Make a query
In the top right, there is a textbox that says Try it now… Type in “Forecast for DC” and Enter.
After a moment, you will see the results. The intent “What is the weather in @city” was matched, and the action it returned was “weatherForecast” with a parameter of “cityLocation” with a value of “Washington, DC”. So the natural language phrase “Forecast for DC” was translated into an action with a parameter that your software can easily process.
Try other phrases, such as “what is the weather in the big apple”.
###Next steps
This Quick Start showed you how to use the user interface to try out API.AI and learn the concepts of agents, entities, and intents. Next, the REST API Quick Start shows you how to do the same tasks using the REST API.
###Entities
An entity is a data type that defines a mapping between a set of synonyms (that is, ways a particular concept could be expressed in natural language) and a reference (that is, canonical) value that will be used by your software.
Each entity has a name and one or more values. The names are unique for each agent and can contain Latin letters ([a-zA-Z]), numeric characters ([0-9]), and the symbols “_” and “-”. (These are also referred to as Legal Names.) Each value is a mapping between a set of strings that represents synonyms and the reference value.
For example, you could create an entity with the name “city” that has a value for each major city and a list of synonyms to describe it. The following image shows what this entity could look like in the API.AI user interface.
###Aliases
When an entity is used in an expression, it will also have an alias, which acts like a variable name so that it can be referenced later. In the entity definition, the alias takes the form @entity:alias, and when it is referenced later, it takes the form $alias.
See Aliases for more information.
###Entity Types
API.AI supports the following types of entities:
- System entities. Entities that are provided by the API to handle common natural language synonyms. For example, @sys.date provides matches for common date references such as “January 1, 2015”. See System Entities for a complete list.
- Inline entities. As part of an entity or intent, the expression @{a, b, c, ...} indicates that a, b, c, etc. are all synonyms. For example, the expression @{bug, defect} report matches both “bug report” and “defect report”.
- Developer-defined entities. You can create your own entities, either through the API.AI user interface or through the API. An example is the “city” entity, shown above.
Note: All three types of entities can have aliases.
Note: System and developer-defined entities can be used in expressions in both other entities and intents.
###Entity and Alias Syntax
Use the following syntax when referencing entities:

|Type|Format|Examples|
|----|------|--------|
|Fully qualified name|@agent.entity:alias|@sys.number:aisleNumber could be used to represent an aisle number. @pandora.artist:selectedArtist could be used to represent an artist. In this case, “pandora” is the agent name.|
|Local name|@entity:alias|@artist:selectedArtist is the same as above, for the current agent.|
Note: System entities must always be referenced with the fully qualified name, starting with @sys.
For aliases, you can extract either the reference value or the original value, as shown in the following table:
|Type|Format|
|----|------|
|Reference value|$alias|
|Original value|$alias.original|
For example, you may have a user expression: “Weather forecast for @city:location” and the @city entity contains a value where “New York” and “Big Apple” have a reference value of “New York, NY”.
If the user input is “Weather forecast for Big Apple”, then:
- $location has a value of “New York, NY”
- $location.original has a value of “Big Apple”
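In other words, resolution keeps two values per match: the reference value looked up from the entity, and the text the user actually said. A minimal sketch using the @city data from this example (class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

class AliasValue {
    // Synonym -> reference value for the @city entity in this example.
    static final Map<String, String> CITY = new HashMap<>();
    static {
        CITY.put("New York", "New York, NY");
        CITY.put("Big Apple", "New York, NY");
    }

    // Index 0: reference value ($alias); index 1: original text ($alias.original).
    static String[] resolve(String original) {
        return new String[] { CITY.get(original), original };
    }
}
```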
###Using Entities in Entity Definitions
When creating developer-defined entities, you can use previously-defined entities. These include:
- Inline entities
- System entities
- Other developer-defined entities

Note: You cannot use aliases when defining entity values.
###Inline entities
Synonyms can be defined in-line when creating entity values. In the following example, @{defect, bug} report is used as a shorter way of saying that both “defect report” and “bug report” are synonyms with a reference value of “defect”.
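One way to picture inline-entity matching is as rewriting the @{...} group into a regular-expression alternation. This is an illustrative model of the behavior, not the actual API.AI matcher:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class InlineEntity {
    // Rewrites e.g. "@{defect, bug} report" into the regex "(defect|bug) report".
    static Pattern expand(String expression) {
        Matcher m = Pattern.compile("@\\{([^}]*)\\}").matcher(expression);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            // Join the comma-separated synonyms with '|'.
            String alternation = "(" + m.group(1).replaceAll("\\s*,\\s*", "|") + ")";
            m.appendReplacement(sb, Matcher.quoteReplacement(alternation));
        }
        m.appendTail(sb);
        return Pattern.compile(sb.toString());
    }
}
```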
###System and developer-defined entities
You can also use system entities and user-defined entities in synonym definitions. In the following example, any user expression that starts with “aisle” and is followed by a number will be mapped to the reference value “aisle”.
**Note:** With this type of implementation, the value of the number cannot be accessed. Future features will allow the creation of more complex entities where the values of system and developer-defined entities can be accessed.
###System Entities
API.AI has a number of predefined system entities that describe common concepts, such as numbers, dates, etc.

|Entity Group|Entity name|Description|Examples|Returned object structure|
|------------|-----------|-----------|--------|--------------------------|
|Generic|@sys.any|Matches any non-empty input|“Find a restaurant in @sys.any:location”|String of the user input, such as {"location":"New York City"}|
| |@sys.void|Matches an empty string. Used to specify optional phrases|“Create a @{bug, sys.void} report” will match “Create a bug report” and “Create a report”|Does not return a value|
|Numbers|@sys.number|Matches a number|“1” “two hundred thirty”|Integer, such as {"num":10}|
|Date and Time|@sys.date|Matches a date. Both absolute and relative dates are supported.|“January 1” “Tomorrow” “January first”|Date in ISO-8601 format. For example, {"date":"2014-12-31"}|
| |@sys.date-time|Matches a date and time.|“Tomorrow at 4 pm” “On January 1 at 12 pm”|Date/time in ISO-8601 format, including time zone. For example, {"dateTime":"2014-08-09T22:45:29+00:00"}|
| |@sys.date-period|Matches a date interval.|“April” “weekend” “from January 1 till January 15” “in 2 days”|Date period in ISO-8601 format. For example, {"period":"2014-01-01/2014-12-31"}|
| |@sys.time|Matches a time.|“1 pm” “20:30” “half past four”|Time in ISO-8601 format (hh:mm:ss). For example, {"time":"13:30:00"}. Note: Does not include time zone data.|
| |@sys.time-period|Matches a time interval.|“afternoon” “tonight” “from 1 pm till 3:30 pm” “in 2 minutes”|Time period in ISO-8601 format (hh:mm:ss). For example, {"time":"13:30:00/14:30:00"}. Note: Does not include time zone data.|
|Actions|@sys.lock-unlock|Words describing lock/unlock actions (e.g. for a smart home door lock)|“lock” “unlock”|String = “lock” / “unlock”|
| |@sys.next-previous|Words describing “next/previous” actions|“next” “previous”|String = “next” / “previous”|
| |@sys.on-off|Words describing “on/off” actions|“on” “turn off”|String = “on” / “off”|
| |@sys.play-pause|Words describing “play/pause” actions|“play” “pause”|String = “play” / “pause”|
| |@sys.start-stop|Words describing “start/stop” actions|“start” “stop”|String = “start” / “stop”|
|Color|@sys.color|Words describing colors|“green” “magenta”|String with corresponding color|
|Names|@sys.given-name|Common given names|“John” “Mary”|String with corresponding given name|
|Phones|@sys.phone-number|Phone number|“(123) 456 7890” “+1 (123) 456-7890”|Phone number without punctuation and spaces, e.g. “11234567890”|
|Email|@sys.email|Email address|user@example.com|Email address as string|
###Quick Start: Contexts
In this Quick Start, you will return to the intent that you created in the User Interface Quick Start and add context information.
The meanings of some natural language expressions are vague when taken by themselves, but have meaning when placed in context. API.AI allows you to set a context when an intent is matched and also create intents that will only be matched when certain contexts have been previously set.
In the User Interface Quick Start, you created an intent for weather forecasts for cities. The expressions took the form “What is the weather in @city” and “Forecast for @city”. Let’s say that a user asked “What is the weather in New York?” and then followed it up with “What about DC?”. We want to perform the same weather forecast action, but only because we know that the context is about weather. In order to do this, we must set the appropriate contexts to “weather”.
See Contexts for more information on how contexts work.
###Step 1. Set a weather context.
Click on the Intents icon at the top of the page and choose What is the weather in @city.
Click on Define contexts.
Click on add output context… and type “weather”.
Click on Save & close.
Now, once this intent is matched, the context will be set to “weather”. This context will expire after 5 queries or 5 minutes, unless the context is set again.
###Step 2. Create a new intent for “What about @city”.
Next, you are going to create an intent to handle the phrase “What about @city”.
You should be on the Intents screen. Click on Create Intent.
Click on New Intent and change it to “What about @city”. Under User says, click on Add user expression and change it to “What about @city:location”. Under Action, click on Enter action name… and change it to “weatherForecast”. Under PARAMETER NAME, click on Enter name… and change it to “cityLocation”. Click on Enter value… and change it to “$location”. It should look like this:
This intent will look for phrases that start with “What about…” and end with any of the synonyms for New York or Washington, DC. If the phrase matches, then it will return an action of name “weatherForecast” with a parameter of name “cityLocation” with the reference value of the city (“New York, NY” or “Washington, DC”).
However, we only want to match this intent if the context has been set to “weather”. Click on Define contexts. Click on add input context… and type “weather”. We also want to keep the context as “weather” in case they ask again, so click on add output context… and type “weather”. It should look like this:
Click Save & close.
###Step 3. Run the queries.
We can use the user interface to try out the queries. In the Try it now… text box, type “What about Washington”. Because the context was not set, there will be no matches.
Type “What is the weather in New York” and you will see it matched to a weatherForecast action. Now type “What about Washington”. This time, because the context was set, you will see the weatherForecast action.
###Overview
The API.AI iOS SDK makes it easy to integrate speech recognition with API.AI natural language processing API on iOS devices. API.AI allows using voice commands and integration with dialog scenarios defined for a particular agent in API.AI.
###Prerequisites
- Create an API.AI account
- Install CocoaPods
###Running the Demo app
- Run pod update in the ApiAIDemo project folder.
- Open ApiAIDemo.xcworkspace in Xcode.
- In ViewController’s -viewDidLoad, insert the API key and subscription key:
configuration.clientAccessToken = @"YOUR_CLIENT_ACCESS_TOKEN";
configuration.subscriptionKey = @"YOUR_SUBSCRIPTION_KEY";
Note: An agent must already exist in API.AI. The keys can be obtained on the agent’s settings page.
- Define sample intents in the agent.
- Run the app in Xcode. Input is possible by text and voice (experimental).
###Integrating into your app
####1. Initialize CocoaPods
- Add the following to your Podfile:
pod 'ApiAI'
- Run pod install in your project folder.
- Run pod update.
####2. Init the SDK.
In AppDelegate.h, add the ApiAI.h import and a property:

```objectivec
#import <ApiAI/ApiAI.h>

@property(nonatomic, strong) ApiAI *apiAI;
```

In AppDelegate.m, add:
```objectivec
self.apiAI = [[ApiAI alloc] init];

// Define API.AI configuration here.
Configuration *configuration = [[Configuration alloc] init];
configuration.baseURL = [NSURL URLWithString:@"https://api.api.ai/v1"];
configuration.clientAccessToken = @"YOUR_CLIENT_ACCESS_TOKEN_HERE";
configuration.subscriptionKey = @"YOUR_SUBSCRIPTION_KEY_HERE";

self.apiAI.configuration = configuration;
```
####3. Perform request using text.

```objectivec
...
// Request using text (assumes that speech recognition / ASR is done
// using a third-party library, e.g. AT&T)
AITextRequest *request = (AITextRequest *)[_apiAI requestWithType:AIRequestTypeText];
request.query = @[@"hello"];
[request setCompletionBlockSuccess:^(AIRequest *request, id response) {
    // Handle success ...
} failure:^(AIRequest *request, NSError *error) {
    // Handle error ...
}];
[_apiAI enqueue:request];
```
####4. Or perform request using voice:

```objectivec
// Request using voice
AIVoiceRequest *request = (AIVoiceRequest *)[_apiAI requestWithType:AIRequestTypeVoice];
[request setCompletionBlockSuccess:^(AIRequest *request, id response) {
    // Handle success ...
} failure:^(AIRequest *request, NSError *error) {
    // Handle error ...
}];
self.voiceRequest = request;
[_apiAI enqueue:request];
```
##api-ai-android-sdk
The API.AI Android SDK makes it easy to integrate speech recognition with the API.AI natural language processing API on Android devices. API.AI allows using voice commands and integration with dialog scenarios defined for a particular agent in API.AI.
Two permissions are required to use the API.AI Android SDK:
- android.permission.INTERNET for internet access
- android.permission.RECORD_AUDIO for microphone access
Currently, speech recognition is performed using Google’s Android SDK, either on the client device or in the cloud. Recognized text is passed to the API.AI through HTTP requests. In the future, your client app will be able to use the SDK to send an audio file or stream to the API.AI server so that it can be processed there.
Authentication is accomplished through setting the client access token when initializing an AIConfiguration object. The client access token specifies which agent will be used for natural language processing.
Note: The API.AI Android SDK only makes query requests, and cannot be used to manage entities and intents. Instead, use the API.AI user interface or REST API to create, retrieve, update, and delete entities and intents.
###Running the Sample Code The API.AI Android SDK comes with a simple sample that illustrates how voice commands can be integrated with API.AI. Use the following steps to run the sample code:
- Have an API.AI agent created that has entities and intents. See the API.AI documentation on how to do this.
- Open Android Studio.
- Import the api-ai-android-master directory.
- Open the SDK Manager and be sure that you have installed Android Build Tools 19.1.
- In the Project browser, open apiAISampleApp/src/main/java/ai.api.sample/MainActivity.
- Towards the top of the file, you will see a declaration of a static final string called ACCESS_TOKEN. Set its value to be the client access token of your agent. Similarly, set the variable named SUBSCRIPTION_KEY to your subscription key.
- Attach an Android device, or have the emulator set up with an emulated device.
- From the Run menu, choose Debug (or click the Debug symbol). Choose your device.
- You should see an app running with three buttons: Listen, StopListen, and Cancel.
- Click Listen and say a phrase that will be understood by your agent. Wait a few seconds. The JSON that is returned by the API.AI service will appear.
###Getting Started with Your Own App This section describes what you need to do to get started with your own app that uses the API.AI Android SDK. The first part provides an overview of how to use the SDK, and the second part is a tutorial with detailed step-by-step instructions for creating your own app.
####Overview
To create your own app, you must first add the API.AI SDK library to your project. There are two ways to accomplish this; the first way is simpler:
- Add a dependency to your build.gradle file by adding the following line (in the sample app, apiAISampleApp/build.gradle shows an example of how to do this):
compile 'ai.api:sdk:1.1.0'
- Alternatively, download the library source code from GitHub and attach it to your project.
Now you can create your own app, using either integrated speech recognition or using your own speech recognition.
####Using integrated speech recognition
Once you’ve added the SDK library, follow these steps:
- Add two permissions into the AndroidManifest:
- android.permission.INTERNET
- android.permission.RECORD_AUDIO
- Create a class that implements the AIListener interface. This class will process responses from API.AI.
- Create an instance of AIConfiguration, specifying the access token, locale, and recognition engine.
- Use the AIConfiguration object to get a reference to the AIService, which will make the query requests.
- Set the AIListener instance for the AIService instance.
- Launch listening from the microphone via the startListening method. The SDK will start listening for the microphone input of the mobile device.
- To stop listening and start the request to the API.AI service using the current recognition results, call the stopListening method of the AIService class.
- To cancel the listening process without sending a request to the API.AI service, call the cancel method of the AIService class.
- In the onResult method of the AIListener interface, check the response for errors using the AIResponse.isError method.
- If there are no errors, you can get the result using the AIResponse.getResult method. From there, you can obtain the action and parameters.
####Using your own speech recognition
This section assumes that you have performed your own speech recognition and that you have text that you want to process as natural language. Once you’ve added the SDK library, follow these steps:
- Add this permission into the AndroidManifest:
- android.permission.INTERNET
- Create an instance of AIConfiguration, specifying the access token, locale, and recognition engine. You can specify any recognition engine, since that value will not be used.
- Create an AIDataService instance using the configuration object.
- Create the empty AIRequest instance. Set the request text using the method setQuery.
- Send the request to the API.AI service using the method aiDataService.request(aiRequest).
- Process the response.
The following example code sends a query with the text “Hello”:
final AIConfiguration config = new AIConfiguration(ACCESS_TOKEN, SUBSCRIPTION_KEY,
Locale.US.toString(), AIConfiguration.RecognitionEngine.Google);
final AIDataService aiDataService = new AIDataService(config);
final AIRequest aiRequest = new AIRequest();
aiRequest.setQuery("Hello");
try {
final AIResponse aiResponse = aiDataService.request(aiRequest);
// process response object here...
} catch (final AIServiceException e) {
e.printStackTrace();
}
###Tutorial
This section contains a detailed tutorial about creating a new app and connecting it to API.AI.
####Create a new app
Follow these steps to set up your environment and create a new Android app with API.AI integration:
- Create an API.AI agent with entities and intents, or use one that you’ve already created. See the API.AI documentation for instructions on how to do this.
- Open Android Studio. (Download it if you don’t have it.)
- From the start screen (or the File menu), choose New Project…
- In the New Project dialog, fill Application name and Company Domain, then click Next.
- Choose the minimum SDK for the project; the minimum supported by the API.AI SDK is API level 9 (Gingerbread). Click Next.
- Select Blank Activity and click Next.
- Enter the main activity name and click Finish.
####Integrate with the SDK
Next you will integrate with the SDK to be able to make calls. Follow these steps:
- Open AndroidManifest.xml under app/src/main.
- Just above the <activity> tag, add these lines to give the app permission to access the internet and the microphone:
<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.RECORD_AUDIO" />
- Save AndroidManifest.xml.
- Next, you need to add a new dependency for the API.AI library. Right-click on your module name (it should be app) in the Project Navigator and select Open Module Settings. Click the Dependencies tab. Click the + sign at the bottom left and select Library dependency.
- In the dialog that opens, search for ai.api, choose the ai.api:sdk:1.1.0 item, and click OK.
- Open MainActivity.java under app/src/main/java/com.example.yourAppName.app, or whatever your package name is.
- Expand the import section and add the following lines to import the necessary API.AI classes (Map and JsonElement are used later by the result-handling code):
import ai.api.AIConfiguration;
import ai.api.AIListener;
import ai.api.AIService;
import ai.api.GsonFactory;
import ai.api.model.AIError;
import ai.api.model.AIResponse;
import ai.api.model.Result;
import com.google.gson.JsonElement;
import java.util.Map;
####Create the user interface
- Open activity_main.xml under app/src/main/res/layout. This will open the layout in the designer.
- Select and delete the “Hello World” TextView.
- Drag a Button (under Widgets) to the top of the screen. Change the id property to “listenButton” and the text property to “Listen”.
- Drag a Plain TextView (under Widgets) under the button. Expand it so that it covers the rest of the bottom of the screen. Change the id property to “resultTextView” and the text property to an empty string.
- Now return to the MainActivity.java file. Add three import statements to access our widgets:
import android.view.View;
import android.widget.Button;
import android.widget.TextView;
- Create two private members in MainActivity for the widgets:
private Button processButton;
private TextView resultTextView;
- At the end of the onCreate method, add these lines to initialize the widgets:
processButton = (Button) findViewById(R.id.listenButton);
resultTextView = (TextView) findViewById(R.id.resultTextView);
####Create the AI Service and Listener
- Use MainActivity as the class that will be called when events occur by having it implement the AIListener interface. Replace the class declaration with this:
public class MainActivity extends ActionBarActivity implements AIListener {
- In the MainActivity class, create a private member for the AIService class named aiService:
private AIService aiService;
- In the onCreate method, add the following lines to set up the configuration to use Google speech recognition. Replace CLIENT_ACCESS_TOKEN and SUBSCRIPTION_KEY with your client access token and subscription key. When Android Studio asks to add import java.util.Locale, say OK.
final AIConfiguration config = new AIConfiguration("CLIENT_ACCESS_TOKEN",
"SUBSCRIPTION_KEY", Locale.US.toString(),
AIConfiguration.RecognitionEngine.Google);
- Below these lines, initialize the AI service and add this instance as the listener to handle events:
aiService = AIService.getService(this, config);
aiService.setListener(this);
- Add a method to start listening on the button click:
public void listenButtonOnClick(final View view) {
aiService.startListening();
}
- Return to activity_main.xml and click on the Listen button. In the properties pane, set the onClick property to listenButtonOnClick.
- Add the following method to show the results when the listening is complete:
@Override
public void onResult(final AIResponse response) {
if (response.isError()) {
resultTextView.setText("Error: " + response.getStatus().getErrorDetails());
} else {
Result result = response.getResult();
// Get parameters
String parameterString = "";
if (result.getParameters() != null && !result.getParameters().isEmpty()) {
for (final Map.Entry<String, JsonElement> entry : result.getParameters().entrySet()) {
parameterString += "(" + entry.getKey() + ", " + entry.getValue() + ") ";
}
}
// Show results in TextView.
resultTextView.setText("Query:" + result.getResolvedQuery() +
"\nAction: " + result.getAction() +
"\nParameters: " + parameterString);
}
}
- Add the following method to handle errors:
@Override
public void onError(final AIError error) {
resultTextView.setText(error.toString());
}
- Add the following empty methods to implement the AIListener interface:
@Override
public void onListeningStarted() {}
@Override
public void onListeningFinished() {}
@Override
public void onAudioLevel(final float level) {}
####Run the App
- Attach an Android device to your computer or have a virtual device ready.
- Make sure that your module is selected in the dropdown, and then click the Debug button.
- The app should now be running on your device or virtual device. Click the Listen button and then speak a phrase that will work with your intent. Wait a few seconds. The result should appear in the result TextView.
####Troubleshooting
- If you get an “INSTALL_FAILED_OLDER_SDK” error when trying to install the app, check that you have Android SDK 19 and build tools 19.1 installed.
<html>
<head>
<title>API Example</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<script src="http://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script type="text/javascript">
var accessToken = "<your agent access token>";
var subscriptionKey = "<your agent subscription key>";
var baseUrl = "https://api.api.ai/v1/";
$(document).ready(function() {
$("#input").keypress(function(event) {
if (event.which == 13) {
event.preventDefault();
send();
}
});
$("#rec").click(function(event) {
switchRecognition();
});
});
var recognition;
function startRecognition() {
recognition = new webkitSpeechRecognition();
recognition.onstart = function(event) {
updateRec();
};
recognition.onresult = function(event) {
var text = "";
for (var i = event.resultIndex; i < event.results.length; ++i) {
text += event.results[i][0].transcript;
}
setInput(text);
stopRecognition();
};
recognition.onend = function() {
stopRecognition();
};
recognition.lang = "en-US";
recognition.start();
}
function stopRecognition() {
if (recognition) {
recognition.stop();
recognition = null;
}
updateRec();
}
function switchRecognition() {
if (recognition) {
stopRecognition();
} else {
startRecognition();
}
}
function setInput(text) {
$("#input").val(text);
send();
}
function updateRec() {
$("#rec").text(recognition ? "Stop" : "Speak");
}
function send() {
var text = $("#input").val();
$.ajax({
type: "POST",
url: baseUrl + "query/",
contentType: "application/json; charset=utf-8",
dataType: "json",
headers: {
"Authorization": "Bearer " + accessToken,
"ocp-apim-subscription-key": subscriptionKey
},
data: JSON.stringify({ q: text, lang: "en" }),
success: function(data) {
setResponse(JSON.stringify(data, undefined, 2));
},
error: function() {
setResponse("Internal Server Error");
}
});
setResponse("Loading...");
}
function setResponse(val) {
$("#response").text(val);
}
</script>
<style type="text/css">
body { width: 500px; margin: 0 auto; text-align: center; margin-top: 20px; }
div { position: absolute; }
input { width: 400px; }
button { width: 50px; }
textarea { width: 100%; }
</style>
</head>
<body>
<div>
<input id="input" type="text"> <button id="rec">Speak</button>
<br>Response<br> <textarea id="response" cols="40" rows="20"></textarea>
</div>
</body>
</html>
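The jQuery $.ajax call above can also be expressed without jQuery. The sketch below only assembles the request description (URL, headers, body) so the wiring is easy to inspect; the endpoint path, header names, and payload shape are taken from the example above, and buildQueryRequest is a hypothetical helper name, not part of any SDK.

```javascript
// Hedged sketch: build the same POST /query request that send() above issues,
// as a plain object suitable for fetch(url, options).
function buildQueryRequest(baseUrl, accessToken, subscriptionKey, text) {
  return {
    url: baseUrl + "query/",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json; charset=utf-8",
        "Authorization": "Bearer " + accessToken,
        "ocp-apim-subscription-key": subscriptionKey
      },
      // Same payload shape as the jQuery example: query text plus language.
      body: JSON.stringify({ q: text, lang: "en" })
    }
  };
}
```

A caller would then do something like `var req = buildQueryRequest(baseUrl, accessToken, subscriptionKey, text); fetch(req.url, req.options).then(function (r) { return r.json(); })`.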
Plugin ID
ai.api.apiaiplugin
Description
Plugin makes it easy to integrate your Cordova application with http://api.ai natural language processing service.
Maintainers
apiai
Platforms
ios, android
Keywords
[language processing](http://plugins.cordova.io/#/search?search=language processing), [voice recognition](http://plugins.cordova.io/#/search?search=voice recognition)
Install
Using the Cordova CLI: cordova plugin add ai.api.apiaiplugin
Read Me
###api-ai-cordova
This plugin makes it easy to integrate your Cordova application with the api.ai natural language processing service. It supports the Android and iOS mobile operating systems.
Project on Github https://github.com/api-ai/api-ai-cordova
Page in Cordova Plugins Registry http://plugins.cordova.io/#/package/ai.api.apiaiplugin
Github issues https://github.com/api-ai/api-ai-cordova/issues
Demo application sources https://github.com/api-ai/api-ai-cordova-sample
###Installation
- Make sure that Cordova CLI is installed
- Install api.ai plugin with Cordova CLI:
cordova plugin add ai.api.apiaiplugin
###Usage
Add the following code to the onDeviceReady function in your index.js file (typically in the js folder):
ApiAIPlugin.init("YOUR_SUBSCRIPTION_KEY", "YOUR_CLIENT_ACCESS_TOKEN",
function(result) { /* success processing */ },
function(error) { /* error processing */ }
);
Add this function to your page to make voice requests from a mic button:
function sendVoice() {
try {
ApiAIPlugin.requestVoice(
{
lang:"en"
},
function (response) {
// place your result processing here
alert(JSON.stringify(response));
},
function (error) {
// place your error processing here
alert(error);
});
} catch (e) {
alert(e);
}
}
And call it from your button’s onclick, for example:
<button onclick="sendVoice();">Mic</button>
If you want to make text requests, add the following code:
function sendText(query_text) {
try {
ApiAIPlugin.requestText(
{
query: query_text
},
function (response) {
// place your result processing here
alert(JSON.stringify(response));
},
function (error) {
// place your error processing here
alert(error);
});
} catch (e) {
alert(e);
}
}
If you want to create a voice level visualization, use the levelMeterCallback function to set a callback for processing the sound level:
ApiAIPlugin.levelMeterCallback(function(level) {
console.log(level);
});
You can also cancel the current api.ai request:
ApiAIPlugin.cancelAllRequests();
###API
// Initialize plugin
// clientAccessToken - String - client access token from your developer console
// subscriptionKey - String - subscription key from your developer console
// success - Function (optional) - callback for initialization success
// error - Function (optional) - callback for initialization error
ApiAIPlugin.init(clientAccessToken, subscriptionKey, success, error)
// Start listening, then make voice request to api.ai service
// options - JSON object - voice request options; currently must be `{ lang: "en" }`
// success - Function (optional) - callback for request success
// error - Function (optional) - callback for request error
ApiAIPlugin.requestVoice(options, success, error)
// Make text request to api.ai service
// options - JSON object - `{ query: "queryText" }`
// success - Function (optional) - callback for request success
// error - Function (optional) - callback for request error
ApiAIPlugin.requestText(options, success, error)
// Set callback for sound level. Needs to be called only once, after initialization
// callback - Function - must be `function(level) { }`; level is a float value from 0 to 100
ApiAIPlugin.levelMeterCallback(callback)
// Cancel all pending requests
ApiAIPlugin.cancelAllRequests()
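The callback-style functions above can be wrapped in a Promise for easier chaining. This is a hypothetical convenience sketch, not part of the plugin; the plugin object is passed in explicitly so the wrapper can also be exercised outside a Cordova environment.

```javascript
// Hedged sketch: promisify the plugin's callback-style requestText.
// `plugin` is expected to expose requestText(options, success, error),
// as documented above (in an app this would be the global ApiAIPlugin).
function requestTextAsync(plugin, queryText) {
  return new Promise(function (resolve, reject) {
    plugin.requestText({ query: queryText }, resolve, reject);
  });
}
```

Usage in an app would look like `requestTextAsync(ApiAIPlugin, "hello").then(function (response) { /* ... */ })`.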
###Status and Error Codes
The following table describes status and error codes returned by API.AI.
In the status object, the code field contains the status code and the errorType field contains the error type.
|Status Code|Error Type|Description|
|-----------|----------|-----------|
|200|success|Request was successful.|
|200|deprecated|A resource is deprecated and will be removed in the future.|
|400|bad_request|Some required parameter is missing or has the wrong value. Details will be in the errorDetails field.|
|404|not_found|URI is not valid or the resource ID does not correspond to an existing resource.|
|405|not_allowed|HTTP method not allowed, such as attempting to use a POST request with an endpoint that only accepts GET requests, or vice-versa.|
|409|conflict|The request could not be completed due to a conflict with the current state of the resource. This code is only returned in situations where it is expected that the user might be able to resolve the conflict and resubmit the request. For example, deleting an entity that is used in an intent will return this error.|
The query endpoint is used to process natural language, either in the form of text or a sound file. The query requests return structured data in JSON format with an action and parameters for that action.
Takes natural language text and information as query parameters and returns information as JSON.
Request parameters
Required unless sound file is provided. The natural language text to be processed. The request can have multiple query parameters. See Note above. This parameter is required unless sound file is provided
Required when multiple query parameters are used. The confidence of the corresponding query parameter having been correctly recognized by a speech recognition system. 0 represents no confidence and 1 represents the highest confidence. See Note above
Language tag from [HTTP/1.1 specification section 3.10](http://tools.ietf.org/html/rfc2616#section-3.10).
Request headers
Responses
Body
Unique identifier of the result.
Date and time of the request in UTC timezone using ISO-8601 format.
Contains the results of the natural language processing.
The query that was used to produce this result.
Deprecated
An action to take.
Parameters to be used by the action.
device : computer
Contains data on intents and contexts.
ID of the intent that produced this result.
Name of the intent that produced this result.
Contexts that were matched by the intent.
Contexts that were added by the intent.
Takes natural language text as a sound file and returns information as JSON.
Currently, the sound files must be 16000 Hz, Signed PCM, 16 bit, and mono. The following sound file formats are accepted:
|Format|type value|
|------|----------|
|WAV|type=audio/wav|
|Raw audio (headerless)|type=audio/x-raw|
Request parameters
Data for processing. Content type application/json.
The audio format. See the table above.
Binary audio voice data.
Required unless sound file is provided. The natural language text to be processed. The request can have multiple query parameters. See Note above. This parameter is required unless sound file is provided
Required when multiple query parameters are used. The confidence of the corresponding query parameter having been correctly recognized by a speech recognition system. 0 represents no confidence and 1 represents the highest confidence. See Note above
Language tag from [HTTP/1.1 specification section 3.10](http://tools.ietf.org/html/rfc2616#section-3.10).
Request headers
Responses
Body
Unique identifier of the result.
Date and time of the request in UTC timezone using ISO-8601 format.
Contains the results of the natural language processing.
The query that was used to produce this result.
Deprecated
An action to take.
Parameters to be used by the action.
device : computer
Contains data on intents and contexts.
ID of the intent that produced this result.
Name of the intent that produced this result.
Contexts that were matched by the intent.
Contexts that were added by the intent.
Examples
POST https://api.api.ai/v1/query HTTP/1.1
Content-Type: application/json
{
"timezone" : "America/New_York",
"voiceData": "<The binary voice data from hello.wav>"
}
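A voice request like the example above is naturally assembled as a multipart form. The sketch below uses the standard FormData API; the part names request and voiceData follow the example, but treat the exact request shape as an assumption and verify it against your agent before relying on it.

```javascript
// Hedged sketch: compose a multipart body for the voice /query request.
// `audioBlob` is assumed to be a 16000 Hz, signed PCM, 16-bit, mono WAV blob,
// per the accepted-format table above.
function buildVoiceForm(audioBlob, timezone) {
  const form = new FormData();
  // JSON part carrying the request metadata.
  form.append("request", JSON.stringify({ timezone: timezone, lang: "en" }));
  // Binary audio part.
  form.append("voiceData", audioBlob, "query.wav");
  return form;
}
```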
Takes natural language text and information as JSON in the POST body and returns information as JSON.
Request parameters
Required unless sound file is provided. The natural language text to be processed. The request can have multiple query parameters. See Note above. This parameter is required unless sound file is provided
Required when multiple query parameters are used. The confidence of the corresponding query parameter having been correctly recognized by a speech recognition system. 0 represents no confidence and 1 represents the highest confidence. See Note above
Language tag from [HTTP/1.1 specification section 3.10](http://tools.ietf.org/html/rfc2616#section-3.10).
Request headers
Responses
Body
Unique identifier of the result.
Date and time of the request in UTC timezone using ISO-8601 format.
Contains the results of the natural language processing.
The query that was used to produce this result.
Deprecated
An action to take.
Parameters to be used by the action.
device : computer
Contains data on intents and contexts.
ID of the intent that produced this result.
Name of the intent that produced this result.
Contexts that were matched by the intent.
Contexts that were added by the intent.
Examples
POST https://api.api.ai/v1/query HTTP/1.1
Content-Type: application/json
{
"query": "weather",
"timezone": "GMT+6",
"lang": "en",
"contexts":["weather", "local"]
}
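A small helper can assemble the POST body shown above and catch a missing query before the server responds with a 400 bad_request. The function name and defaults here are illustrative, not part of the API.

```javascript
// Hedged sketch: build the JSON body for a text query like the example above.
function buildQueryBody(queryText, options) {
  options = options || {};
  if (!queryText) {
    // Mirrors the bad_request error the server would return for a missing query.
    throw new Error("query is required unless a sound file is provided");
  }
  return {
    query: queryText,
    lang: options.lang || "en",
    timezone: options.timezone,   // e.g. "GMT+6" or an IANA zone name
    contexts: options.contexts || []
  };
}
```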
The entities endpoint is used to create, retrieve, update, and delete developer-defined entity objects.
An entity is a data type that contains mappings between a set of synonyms (that is, ways a particular concept could be expressed in natural language) and a reference (canonical) value. See the Entities Overview for information on entities.
{eid}
{eid}
{eid}
Retrieves a list of all entities for the agent.
Request headers
Responses
Body
an array of entity description objects
ID of the entity
Name of the entity
The total number of synonyms in the entity
A string that contains summary information about the entity
Examples
GET https://api.api.ai/v1/entities HTTP/1.1
HTTP/1.1 200 OK
Content-Type: application/json
[
{
"id": "33868522-5747-4a31-88fb-3cd13bd18684",
"name": "Appliances",
"count": 11,
"preview": "Coffee Maker <= (coffee maker, coffee machine, coffee), ..."
},
{
"id": "6d6b7d50-7510-4fec-927b-ac3c3aaff009",
"name": "Utilities",
"count": 4,
"preview": "Electricity <= (electricity, electrical), ..."
}
]
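The entity list returned above is easy to index client-side, for example to look up the {eid} needed by the endpoints below. This illustrative helper returns the id for a given entity name, or null if nothing matches.

```javascript
// Hedged sketch: look up an entity id by name in a GET /entities response.
function findEntityId(entities, name) {
  for (const entity of entities) {
    if (entity.name === name) {
      return entity.id;
    }
  }
  return null;
}
```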
{eid}
Retrieves the specified entity.
Path variables
ID of the entity to retrieve
Request parameters
ID of the entity
Request headers
Responses
Body
The unique identifier for the entity
The name of the entity
An array of Entry objects, which contain reference names and synonyms.
A canonical name to be used in place of the synonyms.
The array of synonyms. Array of Strings that can include Entity Names, Inline Expressions, and other strings.
["New York", "@big Apple", "city that @{never, seldom, rarely} sleeps"]
Creates a new entity.
Request headers
Request body
Responses
Body
The ID of the new entity.
Examples
POST https://api.api.ai/v1/entities HTTP/1.1
Content-Type: application/json
{
"name": "Appliances",
"entries": [{
"value": "Coffee Maker",
"synonyms": ["coffee maker", "coffee machine", "coffee"]
}, {
"value": "Thermostat",
"synonyms": ["Thermostat", "heat", "air conditioning"]
}, {
"value": "Lights",
"synonyms": ["lights", "light", "lamps"]
}, {
"value": "Garage door",
"synonyms": ["garage door", "garage"]
}]
}
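Since the entity body above is just a name plus a list of value/synonyms pairs, it can be generated from a plain object. The helper name below is hypothetical; the output shape matches the POST example.

```javascript
// Hedged sketch: build a POST /entities body from a { value: [synonyms] } map.
function buildEntityBody(name, synonymsByValue) {
  const entries = Object.keys(synonymsByValue).map(function (value) {
    return { value: value, synonyms: synonymsByValue[value] };
  });
  return { name: name, entries: entries };
}
```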
{eid}
Updates the specified entity.
Path variables
ID of the entity to update
Request parameters
ID of the entity
Request headers
Request body
Responses
Body
Examples
PUT https://api.api.ai/v1/entities/{eid} HTTP/1.1
Content-Type: application/json
{
"id":"80f817e8-23fb-4e8e-ba62-eca1fcef7c3a",
"name": "Utility Types",
"entries": [
{
"value": "Electricity",
"synonyms": [
"electricity",
"electrical"
]
},
{
"value": "Gas",
"synonyms": [
"gas",
"natural gas",
]
},
{
"value": "Water",
"synonyms": [
"water"
]
}
]
}
{eid}
Deletes the specified entity.
Path variables
ID of the entity to delete
Request parameters
ID of the entity
Request headers
Responses
Body
The intents endpoint is used to create, retrieve, update, and delete intent objects.
Intents convert a number of user expressions or patterns into an action. An action is essentially an extraction of the user command or sentence semantics.
See the Intents Overview for information on intents.
{iid}
{iid}
{iid}
Retrieves a list of all intents for the agent.
Request headers
Responses
Body
ID of the intent
Name of the intent
List of contexts that must be set for this intent to be executed
List of contexts that are set after this intent is executed
List of actions set by all responses of this intent
Examples
GET https://api.api.ai/v1/intents HTTP/1.1
HTTP/1.1 200 OK
Content-Type: application/json
[
{
"id": "32159aef-7cda-4f91-861a-d2f569780dcf",
"name": "What is the weather in @city",
"contextIn": [],
"contextOut": [],
"actions": [
"weatherForecast"
]
},
{
"id": "64301508-4b49-4b5d-8561-b514b2538f72",
"name": "turn on/off @appliance",
"contextIn": [],
"contextOut": [
"house"
],
"actions": [
"setAppliance"
]
}
]
{iid}
Retrieves the specified intent.
Path variables
ID of the intent to retrieve
Request parameters
ID of the intent
Request headers
Responses
Body
Creates a new intent.
Request headers
Request body
Responses
Body
The ID of the new intent.
Examples
POST https://api.api.ai/v1/intents HTTP/1.1
Content-Type: application/json
{
"name": "turn on/off @appliance",
"contexts": [],
"templates": [
"turn @onOff @appliance",
"set @appliance @onOff"
],
"responses": [
{
"action": "setAppliance",
"affectedContexts": [
"house"
],
"parameters": [
{
"name": "state",
"value": "@onOff"
},
{
"name": "appliance",
"value": "@appliance"
}
]
}
]
}
POST https://api.api.ai/v1/intents HTTP/1.1
HTTP/1.1 200 OK
Content-Type: application/json
{
"id": "613de225-65b2-4fa8-9965-c14ae7673826",
"status": {
"code": 200,
"errorType": "success"
}
}
{iid}
Updates the specified intent.
Path variables
ID of the intent to update
Request parameters
ID of the intent
Request headers
Request body
Responses
Body
Examples
PUT https://api.api.ai/v1/intents/{iid} HTTP/1.1
Content-Type: application/json
{
"id": "613de225-65b2-4fa8-9965-c14ae7673826",
"name": "Set Appliance On or Off",
"contexts": [],
"templates": [
"turn @onOff @appliance",
"set @appliance @onOff"
],
"responses": [
{
"action": "setAppliance",
"affectedContexts": [
"house"
],
"parameters": [
{
"name": "state",
"value": "@onOff"
},
{
"name": "appliance",
"value": "@appliance"
}
]
}
]
}
{iid}
Deletes the specified intent.
Path variables
ID of the intent to delete
Request headers
Responses
Body
The status object is returned with every request and indicates if the request was successful. If it is not successful, error information is included. See Status and Error Codes for more information on the returned errors.
Example Successful Response
{
"status": {
"code": 200,
"errorType": "success"
}
}
Example Unsuccessful Response
{
"status": {
"code": 400,
"errorType": "bad_request",
"errorDetails": "Json request query property is missing"
}
}
ID of the error. Optionally returned if the request failed.
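Since every response carries a status object, a single guard can distinguish the two shapes shown above. A minimal sketch (the function name is illustrative):

```javascript
// Hedged sketch: return null when the status reports success,
// or a readable message built from errorType and errorDetails otherwise.
function statusError(response) {
  const status = response.status || {};
  if (status.errorType === "success") {
    return null;
  }
  return status.errorType + (status.errorDetails ? ": " + status.errorDetails : "");
}
```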
The entity JSON object contains information about synonyms and their reference value.
The name of the entity
An array of Entry objects, which contain reference names and synonyms.
A canonical name to be used in place of the synonyms.
Array of Strings that can include Entity Names, Inline Expressions, and other strings.
["New York", "@big Apple", "city that @{never, seldom, rarely} sleeps"]
The unique identifier for the entity
Legal name
An array of Entry objects, which contain reference names and synonyms.
A canonical name to be used in place of the synonyms.
Array of Strings that can include Entity Names, Inline Expressions, and other strings.
["New York", "@big Apple", "city that @{never, seldom, rarely} sleeps"]