Tag Archives: Machine Learning

Beyond Google Search of Personal Data – A Proactive AI Digital Assistant

As covered in the previous post, Google now searches your personal data (Calendar, Gmail, Photos) and produces consolidated results. Why can't the Google Assistant take advantage of the same data sources?

Google may attempt to leapfrog its Digital Assistant competition by taking advantage of its ability to search across all Google products. The more personal data a Digital Assistant can access, the greater the potential value per conversation.

As a first step, Google's "Personal" search tab in the Search UI has access to Google Calendar, Photos, and Gmail data. No doubt other Google products will follow.

The big benefit is not just letting the consumer search through their personal Google data, but providing that consolidated view to the AI Assistant. Does the Google [Digital] Assistant already have access to Google Keep data, for example? Is providing Google's "Personal" search results a prerequisite to broadening the Digital Assistant's access and usage? If so, these interactions are most likely based on a reactive model rather than proactive dialogs, i.e. the Assistant initiating the conversation with the human.

Note: The "Google App" for mobile platforms already promises:

“What you need, before you ask. Stay a step ahead with Now cards about traffic for your commute, news, birthdays, scores and more.”

I'm not sure how proactive the Google AI is built to be, but most likely it's barely scratching the surface of what's possible.

Modeling Personal, AI + Human Interactions

Starting from N accessible data sources, the Assistant searches for actionable data points, correlates those data points with others, and then escalates to the human through a dynamic or predefined Assistant Consumer Workflow (ACW). A proactive AI Digital Assistant initiates human contact to engage in commerce without otherwise being triggered by the consumer.

Actionable data point correlations can trigger multiple goals in parallel. However, the execution of goal-based rules would need to be managed: the consumer doesn't want to be bombarded with AI Assistant suggestions, but at the same time, "choice" opportunities may be appropriate, much as the Google [mobile] App has implemented 'Cards' of bite-size data, consumable from the UI at the user's discretion.

As an ongoing 'background' AI/ML process, a Digital Assistant 'server-side' agent may derive correlations between one or more data source records to get a deeper perspective of the person's life, and potentially be proactive about providing input to the consumer's decision-making process.
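To make the ACW idea concrete, here is a minimal sketch, assuming hypothetical DataPoint, Goal, correlate, and run_acw constructs; it is one way the scan–correlate–suggest loop (with a throttle so the consumer isn't bombarded) could be modeled, not an actual Assistant API.

```python
# Hypothetical sketch of a proactive Assistant Consumer Workflow (ACW) loop.
# None of these names reflect a real Google Assistant API.
from dataclasses import dataclass, field
from typing import Callable, Iterable

@dataclass
class DataPoint:
    source: str                       # e.g. "gmail", "calendar", "photos"
    tags: set                         # e.g. {"fishing", "august", "annual"}
    payload: dict = field(default_factory=dict)

@dataclass
class Goal:
    name: str
    matches: Callable[[list], bool]   # rule evaluated over a correlated cluster
    suggest: Callable[[list], str]    # produces the suggestion text

def correlate(points: Iterable[DataPoint], shared_tags: int = 2) -> list:
    """Group data points from different sources that share enough tags."""
    points = list(points)
    groups = []
    for i, a in enumerate(points):
        cluster = [a] + [b for b in points[i + 1:]
                         if a.source != b.source and len(a.tags & b.tags) >= shared_tags]
        if len(cluster) > 1:
            groups.append(cluster)
    return groups

def run_acw(points, goals, max_cards: int = 3) -> list:
    """Evaluate goal rules against correlations; cap output so the user isn't bombarded."""
    cards = []
    for cluster in correlate(points):
        for goal in goals:
            if goal.matches(cluster):
                cards.append(goal.suggest(cluster))
    return cards[:max_cards]          # surface only a few 'Card'-style suggestions
```

The max_cards cap mirrors the 'Cards' approach: a handful of suggestions surfaced at the user's discretion rather than a stream of interruptions.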

Bass Fishing Trip

For example,

  • The proactive Google Assistant may suggest booking your annual fishing trip soon — an elevated interaction with the consumer/user.
  • The Assistant may search Gmail for records referring to an annual fishing trip 'last year' in August: an AI background, server-side parameter/profile search driven by a predefined Assistant Consumer Workflow (ACW) in the "Annual Events" category. Workflows may be predefined for a core set of goals/rules.
  • The AI Assistant may search the user's photo archive on the server side. Any photo metadata could be gleaned from the search, including date/time stamps, abstracted to include the 'season' of the year and other synonym tags.
  • Photos from around 'August' may be earmarked for Assistant use.
  • Photos may be geo-tagged, e.g. Lake Champlain, which is known for its fishing.
  • All objects in an image may be stored as image metadata. Using image object recognition against all photos in the consumer's repository, goal/rule execution may occur against pictures from last August; the Assistant may identify the "fishing buddies" posing with a huge bass.
  • In addition to suggesting the booking, Google's Assistant may bring up 'highlighted' photos from last year's fishing trip to 'encourage' the person to take the trip (see the sketch after this list).
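A rough sketch of that correlation follows, assuming hypothetical photo metadata fields (taken_at, geo, objects) and a pre-fetched set of Gmail subjects; it stands in for real Photos and Gmail APIs for illustration only.

```python
# Illustrative sketch of the "annual fishing trip" correlation described above.
# The metadata fields and email structure are assumptions, not real APIs.
from datetime import datetime

def season_of(dt: datetime) -> str:
    """Abstract a timestamp into a 'season' tag."""
    return {12: "winter", 1: "winter", 2: "winter",
            3: "spring", 4: "spring", 5: "spring",
            6: "summer", 7: "summer", 8: "summer",
            9: "fall", 10: "fall", 11: "fall"}[dt.month]

def earmark_trip_photos(photos, emails):
    """Find last-August photos whose recognized objects and geotag suggest the annual trip."""
    trip_mentioned = any("fishing trip" in e["subject"].lower() for e in emails)
    highlights = [p for p in photos
                  if p["taken_at"].month == 8                     # around 'August'
                  and season_of(p["taken_at"]) == "summer"        # abstracted 'season' tag
                  and "bass" in p.get("objects", [])              # object-recognition tag
                  and p.get("geo") == "Lake Champlain"]           # geotag
    if trip_mentioned and highlights:
        return {"suggestion": "Time to book your annual fishing trip?",
                "highlighted_photos": highlights}
    return None

# Example input shaped like the scenario above
photos = [{"taken_at": datetime(2016, 8, 14), "geo": "Lake Champlain",
           "objects": ["bass", "fishing buddies"]}]
emails = [{"subject": "Annual fishing trip — August dates?"}]
print(earmark_trip_photos(photos, emails))
```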

In this type of interaction, the Assistant has the ability to proactively 'coerce' and influence the human decision-making process. Building these interactive models of communication, and the 'management' process to govern the AI Assistant, is within reach.

Predefined Assistant Consumer/User Workflows (ACWs) may be created by third parties, such as travel agencies, or by industry groups, such as food retailers — "low-hanging fruit" like an easy-to-implement "time to get more milk" workflow. Then again, food may not be the best place to start, e.g. Amazon Dash.

 

Using Google to Search Personal Data: Calendar, Gmail, Photos, and …

On June 16th, 2017, this post was reviewed for relevant updates.

As reported by The Verge on May 26th, Google has added a new Personal tab to search results to show Gmail and Photos content.

Google seems to be rolling out a new feature in search results that adds a “Personal” tab to show content from [personal] private sources, like your Gmail account and Google Photos library. The addition of the tab was first reported by Search Engine Roundtable, which spotted the change earlier today.

I've been very vocal about Google federated search, specifically across the user's data sources, such as Gmail, Calendar, and Keep. Although Google doesn't seem to have implemented federated search across all of a user's Google data sources yet, they've picked a few and started up the mountain.

It seems Google is rolling out this capability iteratively; as with Agile/Scrum, the intent is to get user feedback and deliver in slices.

Search Engine Roundtable didn't indicate that Google has publicly announced this effort; perhaps Google is waiting for more substance, and more stick time.

As initially reported by Search Engine Roundtable, Gmail results appear in a single-column text output with links to the content, in this case email.

Google Personal Search Results – Gmail

The "Personal" search output appears in the following sequence:

  • Agenda (Calendar)
  • Photos
  • Gmail

Each of the three app data sources displayed on the "Personal" search tab enables the user to drill down into the records displayed, e.g. a specific email.

Google Personal Search Results – Calendar

Group Permissions – Searching

Providing users the ability to search across varied Google repositories (shared calendars, photos, etc.) will enable both business teams and families (e.g. Apple's iCloud Family Sharing) to collaborate and share more seamlessly. At present, Cloud Search, part of G Suite by Google Cloud, offers search across team/org digital assets:

Use the power of Google to search across your company’s content in G Suite. From Gmail and Drive to Docs, Sheets, Slides, Calendar, and more, Google Cloud Search answers your questions and delivers relevant suggestions to help you throughout the day.

 

Learn More? Google Help

Click here to learn more about "Search results from your Google products." At this time, according to this Google post:

You can search for information from other Google products like Gmail, Google Calendar, and Google+.


Dear Google [Search]  Product Owner,

I request that Google Docs and Google Keep be among the next data sources enabled for the Personal search tab.

Best Regards,

Ian

 

AI Whispering Digital Co-Counsel for Any Litigation

Are you adequately prepared for your next litigation? Does going into court with an army of co-counsel make you feel more confident, more prepared? Make sure you bring along the AI whispering Digital Co-Counsel: co-counsel that doesn't break a sweat, doesn't get nervous, and is always prepared. It even takes the opportunity to learn while on the job — machine learning.

The whispering digital agent advises litigators with "just-in-time" rebuttals, citing historical precedent, for example. The Digital Co-Counsel analyzes the dialog within the courtroom to identify 'goals', the intent of the conversation(s). It also identifies the current workflow stage, such as cross or direct examination, the opening statement, or the closing argument.
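As a toy illustration of that stage identification, the sketch below keys off a few hard-coded courtroom phrases; the STAGE_CUES table and identify_stage helper are assumptions standing in for what would realistically be a trained intent/stage classifier.

```python
# Rough sketch: tag the current courtroom workflow stage from an utterance.
# Keyword cues are purely illustrative, not a real classifier.
STAGE_CUES = {
    "opening statement": ["ladies and gentlemen of the jury", "the evidence will show"],
    "direct examination": ["please state your name", "what did you observe"],
    "cross examination": ["isn't it true", "you testified earlier"],
    "closing argument": ["in summary", "we ask you to find"],
}

def identify_stage(utterance: str, current: str = "unknown") -> str:
    """Return the workflow stage implied by the utterance, else keep the current one."""
    text = utterance.lower()
    for stage, cues in STAGE_CUES.items():
        if any(cue in text for cue in cues):
            return stage
    return current

print(identify_stage("Isn't it true you were not at the scene?"))  # -> cross examination
```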

Real-time observation of a court case, and advice based on:
  • Observed dialog interactions between all parties involved in the case — opposing counsel, witnesses, subject matter experts — may trigger "guidance" from the Digital Co-Counsel based on a compound of utterances and the identified workflow.
  • Submitted court case evidence may be digitized and analyzed based on a [predetermined] combination of identified attributes. That evidence, in turn, may be rebutted with counter-arguments, alternate 'perspectives', or counter-evidence.
  • The introduction of 'bias' toward the opposing counsel.

Implementation of the Digital Co-Counsel may be through a smartphone application, used with a Bluetooth earpiece throughout the case.

My opinions are my own, and do not necessarily reflect my employer’s viewpoint.

Hey Siri, Ready for an Antitrust Lawsuit Against Apple? Guess Who’s Suing.

The AI personal assistant with the "most usage" spanning connectivity across all smart devices will be the anchor toward which users gravitate to control their 'automated' lives. An Amazon commercial just aired depicting a dad with his daughter; the daughter was crying about her boyfriend, who happened to be in the front yard yelling for her. The dad says to Amazon's Alexa, "sprinklers on," and yes, the boyfriend got soaked.

What is so special about the top spot for the AI Personal Assistant? Controlling the 'funnel' through which all information is accessed and actions are taken means the intelligent ability to:

  • Serve up content/information, which could then be mixed with advertisements or 'intelligent suggestions' based on historical data, i.e. machine learning.
  • Proactive, suggestive actions may lead to sales of goods and services, e.g. the AI Personal Assistant flags potential 'buys' from eBay based on user profiles.

Three main sources of AI Personal Assistant value add:

  • A portal to the "outside" world. E.g., if I need information, I wouldn't "surf the web"; I would ask Cortana to go "research" XYZ. In the Business Intelligence/data warehousing space, a business analyst may need to run a few queries to get the information they want; by the same token, Microsoft Cortana may come back to you several times to ask for your guidance.
  • An abstraction layer between the user and their apps. The user need not 'lift a finger' in any app outside the Personal Assistant, with noted exceptions such as having it play a game for you.
  • User profiles derived from the first two points, i.e. data collection on everything from spending habits to other day-to-day rituals.

Proactive and chatty assistants may win "Assistant of Choice" on all platforms. Being proactive means collecting data more often than when it's just you asking questions ad hoc. Proactive AI Personal Assistants that are geo-aware may make "timely, appropriate interruptions" (notifications) based on time and location. E.g. "Don't forget milk," says Siri, as you're passing the grocery store. Around the time I leave work, Google Maps tells me if I have traffic and my ETA.
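A small sketch of such a geo-aware, time-bounded "interruption" is below; the haversine distance check, the 300-meter radius, and the reminder structure are illustrative assumptions, not any platform's notification API.

```python
# Illustrative sketch: fire a reminder when the user is near a tagged place
# within an appropriate time window.
from math import radians, sin, cos, asin, sqrt
from datetime import datetime

def km_between(lat1, lon1, lat2, lon2):
    """Haversine distance in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def maybe_notify(user_pos, now, reminders, radius_km=0.3):
    """Return reminder texts whose place is nearby and whose time window is open."""
    fired = []
    for r in reminders:
        close = km_between(*user_pos, *r["place"]) <= radius_km
        timely = r["after_hour"] <= now.hour <= r["before_hour"]
        if close and timely:
            fired.append(r["text"])
    return fired

reminders = [{"text": "Don't forget milk", "place": (40.7359, -73.9911),
              "after_hour": 16, "before_hour": 21}]
print(maybe_notify((40.7361, -73.9913), datetime(2017, 6, 16, 18, 0), reminders))
```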

It's possible for a [non-native] AI Personal Assistant to become the 'abstraction' layer on top of ANY mobile OS (iOS, Android), and the funnel through which all actions/requests are triggered.

Microsoft Cortana has an iOS app and widget, which wraps around the OS. Tighter integration may be possible but is not allowed by iOS, the iPhone, and Apple. Note: Google's Allo does not provide an iOS widget at the time of this writing.

The antitrust case against smartphone maker Apple: iOS must allow the 'substitution' of a competing AI Personal Assistant, triggered in the same manner as native Siri — the "press and hold the Home button" gesture that launches the default packaged iOS assistant. This is reminiscent of the Microsoft IE browser/OS antitrust violations of the past.

Holding the iPhone Home button brings up Siri. There should be an OS setting to swap out which Assistant is used as the mobile OS default. Today, the iPhone/iPad iOS only supports "Siri" under the Settings menu.

ANY AI Personal Assistant should be allowed to replace the default OS assistant — from Amazon's Alexa and Microsoft's Cortana to any startup with the expertise and resources needed to build and deploy a Personal Assistant solution. Has Apple taken steps to tightly couple Siri with its iOS?

AI Personal Assistant 'Wish' List:

  • Interactive, voice-menu-driven dialog. The AI Personal Assistant should know what installed [mobile] apps exist, as well as their actionable, hierarchical taxonomy of features/functions. The Assistant should, for example, ask which application the user wants to use, and if the user doesn't know, the Assistant should verbally/visually list the apps. After the user selects the app, the Assistant should then provide a list of function choices for that application, e.g. "Press 1 for Play Song" (see the sketch after this list).
    • The interactive voice menu should also provide a level of abstraction when available; e.g. the user need not select the app and can just say "Create Reminder." There may be several applications on the smartphone that do the same thing, such as note taking and reminders. In the OS Settings, under the soon-to-be-new 'AI Personal Assistant' menu, installed applications compatible with this "AI Personal Assistant" service layer should be listed, grouped by sets of categories defined by the mobile OS.
  • Capability to interact with IoT devices using user-defined workflows. Hardware and software may exist in the cloud.
  • Ever tighter integration with native as well as 3rd party apps, e.g. Google Allo and Google Keep.
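Below is a small sketch of how that app/function taxonomy and the "Create Reminder" abstraction might be represented; the APP_TAXONOMY registry and helper functions are hypothetical example data, not an existing OS service layer.

```python
# Hypothetical registry of installed apps, their categories, and their functions.
APP_TAXONOMY = {
    "Music Player": {"category": "media", "functions": ["Play Song", "Pause", "Next Track"]},
    "Notes":        {"category": "productivity", "functions": ["Create Note", "Create Reminder"]},
    "Calendar":     {"category": "productivity", "functions": ["Create Event", "Create Reminder"]},
}

def menu_for(app: str) -> list:
    """Numbered function choices the Assistant could read aloud, e.g. 'Press 1 for Play Song'."""
    return [f"Press {i} for {fn}" for i, fn in enumerate(APP_TAXONOMY[app]["functions"], start=1)]

def apps_handling(function: str) -> list:
    """Abstraction layer: which installed apps can satisfy a spoken intent."""
    return [app for app, meta in APP_TAXONOMY.items() if function in meta["functions"]]

print(menu_for("Music Player"))          # ['Press 1 for Play Song', ...]
print(apps_handling("Create Reminder"))  # ['Notes', 'Calendar']
```

When more than one app can satisfy a spoken intent, the Assistant could fall back to the numbered voice menu to let the user pick.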

Apple could already be making these changes in the natural course of its product evolution. Even if the 'big boys' don't want to stir up a hornet's nest, all you need is VC money and a few good programmers to pick a fight with Apple.

The Race Is On to Control Artificial Intelligence, and Tech’s Future

Amazon, Google, IBM and Microsoft are using high salaries and games pitting humans against computers to try to claim the standard on which all companies will build their A.I. technology.

In this fight — no doubt in its early stages — the big tech companies are engaged in tit-for-tat publicity stunts, circling the same start-ups that could provide the technology pieces they are missing and, perhaps most important, trying to hire the same brains.

For years, tech companies have used man-versus-machine competitions to show they are making progress on A.I. In 1997, an IBM computer beat the chess champion Garry Kasparov. Five years ago, IBM went even further when its Watson system won a three-day match on the television trivia show “Jeopardy!” Today, Watson is the centerpiece of IBM’s A.I. efforts.

Today, only about 1 percent of all software apps have A.I. features, IDC estimates. By 2018, IDC predicts, at least 50 percent of developers will include A.I. features in what they create.

Source: The Race Is On to Control Artificial Intelligence, and Tech’s Future – The New York Times

The next “tit-for-tat” publicity stunt should most definitely be a battle with robots, exactly like BattleBots, except…

  1. Use A.I. to consume vast amounts of video footage from previous bot battles, while identifying key elements of bot design that gave a bot the ‘upper hand’.  From a human cognition perspective, this exercise may be subjective. The BattleBot scoring process can play a factor in 1) conceiving designs, and 2) defining ‘rules’ of engagement.
  2. Use A.I. to produce BattleBot designs for humans to assemble.
  3. Autonomous battles, bot on bot, based on Artificial Intelligence battle ‘rules’ acquired from the input and analysis of video footage.