Analyse text entities, sentiment, syntax and categorisation using the Google Natural Language API

gl_nlp(string, nlp_type = c("annotateText", "analyzeEntities",
  "analyzeSentiment", "analyzeSyntax", "analyzeEntitySentiment",
  "classifyText"), type = c("PLAIN_TEXT", "HTML"), language = c("en", "zh",
  "zh-Hant", "fr", "de", "it", "ja", "ko", "pt", "es"),
  encodingType = c("UTF8", "UTF16", "UTF32", "NONE"))

Arguments

string

A character vector of text to analyse, or Google Cloud Storage URI(s)

nlp_type

The type of Natural Language Analysis to perform. The default annotateText will perform all features in one call.

type

Whether the input text is plain text or an HTML page

language

Language of the source text; must be supported by the API.

encodingType

Text encoding that the caller uses to process the output
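The default nlp_type of annotateText runs every feature in one call, but a single analysis can be requested instead. A minimal sketch, assuming googleLanguageR is installed and authentication has already been set up (e.g. via gl_auth()); the HTML snippet is a hypothetical example:

```r
library(googleLanguageR)

# Request only sentiment analysis, on an HTML input,
# rather than the default annotateText over all features
sentiment <- gl_nlp(
  "<p>What a wonderful day!</p>",
  nlp_type = "analyzeSentiment",
  type = "HTML"
)

# documentSentiment holds the overall sentiment of the document
sentiment$documentSentiment
```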

Value

A list of objects such as sentences, tokens, entities and documentSentiment, for whichever features are requested via nlp_type:

Details

string can be a character vector, or the location of file content on Google Cloud Storage. This URI must be of the form gs://bucket_name/object_name
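A sketch of passing a Google Cloud Storage URI instead of raw text, again assuming authentication is already configured; the bucket and object names are hypothetical placeholders:

```r
library(googleLanguageR)

# Analyse a text file already stored on Google Cloud Storage
# by passing its gs:// URI as the string argument
nlp <- gl_nlp("gs://my-bucket/my-text-file.txt")

nlp$entities
```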

Encoding type can usually be left at the default, UTF8. See the Natural Language API documentation for details.

Current language support is listed in the Natural Language API documentation.

See also

Examples

# NOT RUN {
text <- "to administer medicine to animals is frequently a very difficult matter, and yet sometimes it's necessary to do so"
nlp <- gl_nlp(text)

nlp$sentences
nlp$tokens
nlp$entities
nlp$documentSentiment

## vectorised input
texts <- c("The cat sat on the mat", "oh no it didn't you fool")
nlp_results <- gl_nlp(texts)
# }