
UUMAX Blog

3/28/2023 Patent Issued for UUMAX AI-Powered User Experience

We are thrilled to announce that Imageteq has been awarded patent #US11614952B2 by the USPTO, effective March 28, 2023:
Systems and methods for providing modular applications with dynamically generated user experience and automatic authentication.

This achievement would not have been possible without the hard work and dedication of our talented team, and we couldn't be prouder. This patent represents a significant milestone in our company's history and reinforces our commitment to innovation and excellence. Thank you to everyone who has supported us along the way, and we look forward to continuing to push the boundaries of what's possible.

Artificial Intelligence (AI) has been in the news a lot lately. All sorts of applications are being released or announced, and happy and dire predictions alike are being made. Did we set out to “change the world”? No, we leave that to the Steve Jobses, Elizabeth Holmeses, and Richard Hendrickses. Our goal was to make life easier for everyone through more efficient human-machine interactions via more intuitive user experiences, generated with AI and any and all available information. Ultimately, we want to provide the best, most efficient user experience from the perspective of the application's functionality.

Our patent covers:

  • Predictive transactions and dynamic UX & UI

  • Automatic and continuous fuzzy authentication & authorization

  • Group authentication and experiences

So, what is the essence of the invention? Quite simply: our invention approaches the "mind meld" of human and machine. Put another way, we aim to minimize any actions on the part of the human user; the response is anticipated and automatically generated by the machine. Artificial intelligence (AI), machine learning (ML), and advanced cache management are some of the important elements of a system built using our concepts. These elements may have existed elsewhere, but we came up with a unique way of bringing all of the pieces together and applying them to solve real-world problems.

Our innovation has to do with making user experiences dynamic and automatic, with no-to-minimal user actions. We rely on proactive mechanisms and predictive actions where the users do not actually need to do anything to get authenticated, get authorized, get served up customized screens (UI) or other interactive experiences, and groups of users need not take any explicit actions for a system to adapt to a particular set of individuals.

The concepts can be used in many types of applications:

  • Web and mobile apps, along with apps for voice devices (e.g. Alexa), smart watches and other wearables, IoT, etc.

  • Games

  • Financial apps

  • Entertainment

  • Hotels and hospitality

  • Military

  • Medical

  • Hospitals and assisted living facilities

  • Robotics

  • Self-driving cars

  • Drones

  • Industrial robots

We started by asking: why do web and mobile apps have the same fixed navigation menu or list of icons? Why do I need to click through a bunch of screens just to get to “pay bill”, for example? Can’t the system anticipate and serve to me what I want as soon as I log in? For that matter, why is the login such a hassle? Hence our invention!

The purpose of applying our concepts to a system ("our system", as we call it here) is to provide automatic responses proactively, not reactively. As we state in our patent, "the purpose … is to make the overall user interactions as efficient and seamless as possible, in effect approaching a virtual ‘mind meld’ between a user and the backend systems.”

From one of our early brainstorming sessions back in 2016.

Our system includes a modular design that enables connectivity between any type of user device and any type of backend system. In addition, we ensure that the user experience and user interface for any type of platform (e.g., web, mobile, voice, TV, and/or the like) will dynamically change from user to user and/or situation to situation and will be customized for each user based on history, available data (e.g. ERP, CRM, LDAP, instruments and sensors, calendars, social media, etc.), location, time, and/or other public or opt-in information. The system further ensures that authentication and authorization are completed automatically in the background by matching available authentication data to the desired system data. A user may also be re-authenticated or reauthorized in the background to enforce security protocols put in place by the backend systems. The system may also implement group user experiences with automatic group-based authentication, in addition to an individual's user experience.
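To make the idea concrete, here is a minimal sketch, in Python, of how a system might pick the screen to serve at login by scoring candidate actions against available context. All signal names and weights are purely hypothetical - the patent does not prescribe any particular scoring.

```python
# Hypothetical sketch: choose the screen to serve at login by scoring
# candidate actions against available context signals. The signal
# categories and weights below are illustrative only.

def score_action(action, context):
    """Combine weighted evidence that the user wants this action now."""
    score = 0.0
    # Past behavior: how often has this user chosen the action before?
    score += 0.5 * context["history"].get(action, 0.0)
    # Calendar / time signals, e.g. a bill due date approaching.
    score += 0.3 * context["time_signals"].get(action, 0.0)
    # External opt-in data (CRM, sensors, social media, ...).
    score += 0.2 * context["external"].get(action, 0.0)
    return score

def choose_screen(actions, context):
    """Serve the highest-scoring screen instead of a fixed menu."""
    return max(actions, key=lambda a: score_action(a, context))

context = {
    "history":      {"pay_bill": 0.8, "support_ticket": 0.1},
    "time_signals": {"pay_bill": 0.9, "support_ticket": 0.0},
    "external":     {"pay_bill": 0.0, "support_ticket": 0.4},
}
print(choose_screen(["pay_bill", "support_ticket"], context))  # pay_bill
```

In a real system the weights would themselves be learned per user by ML, and the candidate list would come from the backend's available transactions.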

Our patent filing calls out "common context of usage" for a group. We are not just identifying individuals and looking for similarities; we are treating the group itself as a unique entity, as a new "individual" if you will. This is also done dynamically and proactively - to identify a family group at an airport, for example, or a group of enemy soldiers on the battlefield. The proposed system does not require 100% certainty about who the individuals comprising a group are. We look at probabilities of who the group might be and automatically trigger actions based on the probability: more aggressive actions for higher certainty, and less aggressive actions for lower certainty.

The system does not need to interact with the group or ask them for input. We assess available data to determine the course of action.
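As a rough illustration of this probability-tiered approach, the sketch below aggregates per-member identification probabilities into one group-level confidence and maps it to an action tier. The aggregation rule, thresholds, and action names are all hypothetical.

```python
# Hypothetical sketch: treat the group as a single entity whose identity
# is known only probabilistically, with action tiers keyed to certainty.
# Aggregation rule, thresholds, and tier names are illustrative only.

def group_confidence(member_probs):
    """Aggregate per-member identification probabilities into one
    confidence that the group is what we think it is (simple mean)."""
    return sum(member_probs) / len(member_probs)

def select_action(confidence):
    """More aggressive actions at higher certainty, less at lower."""
    if confidence >= 0.9:
        return "act"        # e.g. greet the family, open the gate
    if confidence >= 0.6:
        return "prepare"    # stage resources, narrow the hypothesis
    return "monitor"        # keep observing, gather more data

# Face recognition gave 0.95 on two members; height/dress matching
# gave only 0.5 on a third.
probs = [0.95, 0.95, 0.5]
print(select_action(group_confidence(probs)))  # prepare
```

Because the group is its own entity, new observations about any one member update the confidence, and the action tier, for the group as a whole.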

From the perspective of the criteria used to determine a user's experience, our patent proposes the use not only of the user's past behavior within the app or related data, but also other external information such as their social media posts, other automatically retrieved metadata, etc. Only opted-in data would be used, of course.

For authentication and authorization, we propose a fuzzy approach, while taking into account different types of user experiences. There are individual (human) user experiences. There are (human) group experiences. There are human user (or users)-to-machine, as well as machine-to-machine experiences, where machines could be anything, such as self-driving cars, robots, application programs, smart home automation devices, smartphones, smart TVs, drones, etc.

Authentication and authorization (AuthN & AuthZ) are a major part of the everyday user experience, and these are covered by the patent as well. AuthN & AuthZ are completely re-imagined in our vision. Again, using the human-machine mind meld approach, we ask: why is authorization commonly a “yes” or “no” decision? Does the machine really need the same level of confidence that the user is who the user claims to be if all the user wants to do is submit a support ticket versus, for example, transferring money out of an account? The level of access to the system, in our view, depends on how confident the system is that it's talking to the person it thinks it's talking to. Not only is the authentication and authorization process more flexible as a result, it's also a continuous process: as additional data is obtained, the level of authorization is adjusted - up or down - depending on the circumstances. The system continuously applies available information to dynamically and automatically increase the level of certainty (i.e. the trust level) - without having to rely on the end user. This is active, not passive, auto-improvement of the confidence level. Furthermore, the input criteria themselves can be dynamic. As more is learned about the object to be "recognized" or "authenticated", additional evaluation criteria can be fed into the system so that available data can be compared against the new criteria, increasing the certainty level.

In other words, re-authentication does not simply give the same user the exact same level of access as before. Re-authentication, using fuzzy logic, may result in a different level of access being granted to the exact same user. The user may not even know that he or she has a different level of access; it is done automatically and in the background. The system proactively checks who the user is and the level of confidence it has in that determination. This is the key difference. The system may realize that the user may not be who it thought (or its confidence has dropped) and lower the level of access. Or, conversely, the system may realize that the user is exactly the right person, even by the way the user is interacting with the system - so the interaction itself becomes part of the authentication or re-authentication experience. In fact, re-authentication is continuous and intricately tied to the dynamic experience envisioned by our invention.

Here is an analog of the situation in the physical world: you are at a crowded venue, a concert or the airport. And you see Bob off in the distance. Well, you think it's Bob. Do you wave to him? You don't want to make a mistake and wave to a stranger. As he gets closer, you think, “yeah, I'm pretty sure that's Bob. I'll go ahead and wave.” So you wave. The risk is low. (This is the equivalent of a low-risk recognition and granted-access scenario.) Now, Bob is right in front of you, but you remember that Bob has a shady twin brother, Bill. And now you're not so sure. So you say “hey Bob” and he says “hey” back. And then he says “can I borrow $100 from you?” Well, now you want to be sure, so you ask him something only Bob knows (the equivalent of increasing the level of confidence).
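The Bob scenario maps naturally onto a running confidence score. Below is a hypothetical sketch of continuous, fuzzy authorization: each new signal nudges the score, and the permitted operations are re-derived from the score rather than being a one-time yes/no. The signal weights, thresholds, and operation names are all illustrative, not from the patent.

```python
# Hypothetical sketch of continuous, fuzzy authorization. A running
# confidence score is nudged by each new signal; the access level is
# re-derived from the score, not fixed at login. All weights,
# thresholds, and operation names are illustrative only.

ACCESS_LEVELS = [          # (minimum confidence, permitted operation)
    (0.95, "transfer_money"),
    (0.70, "update_preferences"),
    (0.40, "submit_support_ticket"),
]

class FuzzySession:
    def __init__(self, confidence=0.5):
        self.confidence = confidence

    def observe(self, signal_weight):
        """Positive weights (familiar device, matching interaction
        rhythm) raise confidence; negative ones (new location,
        anomalous behavior) lower it. Clamp to [0, 1]."""
        self.confidence = min(1.0, max(0.0, self.confidence + signal_weight))

    def allowed(self):
        """Operations currently permitted at this confidence level."""
        return [op for need, op in ACCESS_LEVELS if self.confidence >= need]

s = FuzzySession(0.5)
s.observe(+0.25)    # recognized device fingerprint
s.observe(+0.10)    # interaction pattern matches the user's history
print(s.allowed())  # ['update_preferences', 'submit_support_ticket']
```

Note that "transfer_money" stays locked until confidence crosses its higher bar - the system would keep gathering signals in the background, or, like asking Bob a question only he knows, actively seek one more piece of evidence.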

A system based on our patent would provide for group authentication automatically. The automatic grouping of people would be done using AI, fuzzy logic, and metadata.

This concept applies not only to authenticating a human user, but also for machine-to-machine authentication.  That is, one machine or system uses various criteria described above to identify another “machine”. 

What are some other applications of this invention?

The examples are numerous: common consumer applications (e.g. Pay My Bill), military applications, and various types of robotic devices - in factories, hospitals, and assisted living facilities - as well as drones and self-driving cars.

  • Mobile & Web. Imagine an extremely secure mobile application that does not ask you to do anything to log in. It just goes ahead and logs you in. Every time you use the application, it just seems to know what you want to do - pay a bill, submit a support ticket, update preferences - and presents the corresponding screen without you doing anything, as if by magic.

  • Gaming. Consider video games that automatically adjust to each user’s needs, desires, interests, etc., perhaps even those the user is not aware of. “Wow, how did this game know that I always wanted to ride an ATV on a deserted beach in Baja, while being chased by my favorite Hollywood personality?! It’s like it’s reading my mind!”

  • Military applications:

    • Military drone. Picture a drone observing a group of people gathered in a remote location, possibly hostile. The system applies probabilistic evaluation to the available data: face recognition on some individuals (higher probability), height or dress recognition (lower probability), possession of weapons (lower certainty of identity, but higher probability of hostile intent). Do we release a missile? Do we send in special forces? Or just continue to monitor?

    • A drone flying over a battlefield looking for the wounded vs. the dead. The drone makes multiple passes and evaluates criteria - heat signatures, movement, air composition, etc. - to pick out possible signs of life. The system recommends additional flyovers to raise the level of certainty.

  • Self-driving cars

    • A car filled with college buddies is driving itself from San Francisco to LA. The car decides on its own where to stop for breaks - e.g. an Italian restaurant in San Luis Obispo, where it can recharge while the group gets food.

    • A car selects and takes the optimal route to drop off passengers. Maybe the drunkest ones get dropped off first (before they get sick)?

    • Self-driving car auto-recognition of owner and automatic unlocking of the car. 

  • Robotic assistants

    • A robot attending to a group in a group setting (robot butler) or robot at a hospital.

    • A robot cook preparing a dish that everyone in a group likes.

    • A robot roaming a crowded nursing home looking for a specific person - e.g. to summon them, to give medicine, etc. Almost dog-like, possibly using "smell", i.e. localized air composition analysis.

  • Entertainment

    • A TV (or some other screen – e.g. computer, tablet, etc.) showing appropriate content for the entire group watching (e.g. if there are kids present)

    • A group of employees looking at a shared screen, e.g. during a presentation, sees only content that all of them have permission to view

  • Security or law enforcement

    • Detecting a suicide bomber in a crowded space (airport, train terminal, etc.). For example, the system may scan an area looking for particular criteria, e.g. a person wearing heavy clothes in a hot-climate location. As more is learned about the potential suspect, the system admin may feed additional, previously unspecified criteria into the system - for example, the person's gender, height, weight, age, disability (e.g. a limp), or other particulars the system had not initially been set up to evaluate. Here the criteria themselves are dynamic.

    • A law enforcement agency looking for a spy. All that is known is that the spy is a young male and could possibly be disguised as a woman. The system evaluates known criteria and assigns levels of certainty, but attempts to dynamically evaluate every new piece of information as it comes in. The system is also evaluating possible anomalies. This means doing a "What Is Wrong With This Picture" analysis, i.e. "NOT" logic (a is NOT x, b is NOT y, c is NOT z, etc.): too many layers of clothing, too much makeup, eyeglasses that look wrong, a walk that doesn't match. The system is continuously trying to raise the level of certainty.

    • Seeking out sick people (e.g. with Covid) in a crowded airport or other space.

  • Rescue operations:

    • A drone flying over some part of the ocean looking for survivors or people lost at sea.

    • Drone flying over a disaster area (hurricane, tornado, etc.)

  • System to system authentication or recognition.       

    • Robot recognizing another robot based on some criteria.  Perhaps in a military situation - robot soldiers on the same side deciding not to shoot each other.

    • Drone recognizing another drone. Two delivery drones heading to the same location for redundancy reasons recognize each other and only one makes the delivery.  Or military application, e.g. "don't shoot me".

  • Car services

    • Car-for-hire services (e.g. Uber, Lyft, etc.) automatically recognize the user - beyond just the user holding a phone, the system could use other signals such as facial recognition or a common location. The car then automatically makes comfort-level adjustments to fit you (seats, air conditioning, music)

  • Robot assistant (e.g. home robot butler, or healthcare assistant, etc.) auto-recognition of users. It could be an individual user or a group of users.

  • Hotels and hospitality

    • Hotels allow you to do self-check-ins and customize room amenities.                                

  • Payment systems – in a cashless society, in a supermarket or other store, if you opt in and have an account, the system knows who you are and charges your account without the use of a phone, credit cards, etc.

  • Cafes, fast food places, or automated restaurants recognize you and suggest meal & drink choices

To be clear, our invention is not just about providing recommendations (such as places to eat). Recommendation engines are well known and widely used. Our proposed system is much broader and more generic in nature: the behavior of the application or system itself is optimized. For example, we can use a user's social media content, the user's past behavior within the app, other automatically retrieved metadata, etc., along with AI, to derive the behavior of our application - and we can modify that behavior on the fly for a truly dynamic experience.

Our invention applies equally to user experiences on all sorts of platforms - web, mobile, voice (or other audio), TV, visual or eye & body motion/gesture, touch, Virtual/Augmented Reality systems (VR/AR), smart watches, smart contact lenses, smart speakers, other implants, robotic assistants & devices (home, work, automobile, drone), telepathic & brainwave-based inputs, specialized devices for the disabled, automobile dashboards, and so on.

We have already been applying our patented concepts to our B2B software for service providers in the telecommunications and utilities sectors.

Please contact us to learn more or to schedule a demo.


IMAGETEQ TECHNOLOGIES, INC.
533 AIRPORT BLVD SUITE 400 
BURLINGAME, CA 94010
info@uumax.cx
info@imageteq.com