
Context-Sensing and Information 4.0

by Andy on 10 March 2017

Want a glimpse at information access in the near future?

Download the Blippar (www.blippar.com) app.  Scan your hand and this is what you get:

4 screenshots from Blippar after blipping a hand

The app recognises the hand and crawls the web for a transmedia sampling of information. But it doesn’t stop there. Behind these results is an extensive ontology, represented by spheres. Selecting one leads to a new set of results. We chose “finger.” Interestingly, one of the links is to “amputation.”

Blippar believes that visual recognition and augmented reality are its key innovations. True. However, we think the way it processes and offers information to the user is even more significant.

What is really important here is that none of this information is static. If the Wikipedia “Hand” entry is updated, the next person who points Blippar at a hand will get that update. If new relationships are found, they will be delivered also. This is continuous information delivery, continuous update and mash up from a variety of sources.

So, when someone points at your product or software screen, will your user assistance show up? How can you guarantee that it will – and that the information offered is pertinent to the task the user is trying to do? If it is, how can you be sure it appears as the most obvious choice?

That’s part of what we have to solve in the next information revolution.

Not Just Smart Houses or Factories

The four industrial revolutions from Wikipedia

The four design principles in Industry 4.0[i]:

  • Interoperability
  • Information transparency
  • Technical assistance
  • Decentralized decisions

Industry 4.0 seems to have a lot to do with the Internet of Things (IoT), but it’s a lot more than objects determining that the milk in your refrigerator is going bad. It’s a complex network of networks, in which objects take autonomous decisions that affect us directly in a host of ways.

In the Industry 4.0 definition, information transparency is a requirement. It is defined as the ability of information systems to create a virtual copy of the physical world by enriching digital plant models with sensor data. This requires the aggregation of raw sensor data into higher-value context information.

The way we see it, one of the most important products of Industry 4.0 will be information. Notice we said “product,” not “accompaniment to a product,” or “product instructions” or any other euphemism. We all need and want information, and in the future, it will be tailored to profiles that we most likely will not define ourselves.

We use the term Information 4.0 to refer to information in the context of Industry 4.0. Its characteristics are:

  • molecular – there are no documents, just information molecules
  • dynamic – continuously updated
  • offered rather than delivered
  • ubiquitous, online, searchable and findable
  • spontaneous – triggered by contexts
  • profiled automatically

Context-sensing will play a major role, and it goes well beyond our usual understanding.

The Context of Context

Intel[ii] defines context sensing as

“Information that must be collected and used soon, because otherwise the information may not be valid anymore. This kind of context information is called “context state.” A group of “context states” comprise a “snapshot” of the current user’s context, such as location, user activity, and user’s surrounding environment. This snapshot is formally called the “state vector,” which contains a collection of “context states,” describing the user’s current context.”

This is OK as far as it goes, but Intel’s idea of a “state vector” may not have wide enough horizons.
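Before widening those horizons, it may help to make the idea concrete. Here is a minimal sketch of context states and a state vector as data structures; the field names and the freshness rule are our own illustrative assumptions, not Intel’s SDK:

    from dataclasses import dataclass, field
    from datetime import datetime

    # Illustrative sketch only: field names are our assumptions, not Intel's API.
    @dataclass
    class ContextState:
        name: str            # e.g. "location", "activity", "environment"
        value: object        # the sensed value
        sensed_at: datetime  # context states expire; stale values are invalid

    @dataclass
    class StateVector:
        """A snapshot of the user's current context."""
        states: list = field(default_factory=list)

        def fresh(self, max_age_seconds: float) -> "StateVector":
            """Drop context states too old to still be valid."""
            now = datetime.now()
            keep = [s for s in self.states
                    if (now - s.sensed_at).total_seconds() <= max_age_seconds]
            return StateVector(states=keep)

    vector = StateVector(states=[
        ContextState("location", "shopping centre", datetime.now()),
        ContextState("activity", "walking", datetime.now()),
    ])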

Dr. Christian Glahn[iii] has developed a concept of context sensing that he refers to as “Discovery”, for which the central principles are:[iv]

  • Not always more of the same
    • If I just ate, I don’t need more restaurants
  • Meaningful connections
    • If I’m on a business trip and it’s just before a meeting, then I’m not interested in finding a gym
  • Follow rhythms
    • If I always eat dinner around 6:00pm, then I might be interested in finding a restaurant open at that time when I’m away from home
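Expressed as code, these principles might become filtering rules over suggestions. Every function, field and threshold below is our own hypothetical illustration, not Dr. Glahn’s implementation:

    # Hypothetical discovery rules applying Glahn's principles; all names are ours.
    def not_more_of_the_same(suggestion, history):
        """If I just ate, I don't need more restaurants."""
        return not any(h["category"] == suggestion["category"] for h in history)

    def meaningful_connection(suggestion, context):
        """Just before a business meeting, a gym is not a meaningful suggestion."""
        return not (context.get("next_event") == "meeting"
                    and suggestion["category"] == "gym")

    def follows_rhythm(suggestion, habits, local_hour):
        """If I always eat around 18:00, surface restaurants near that hour."""
        habit = habits.get(suggestion["category"])
        return habit is None or abs(local_hour - habit["usual_hour"]) <= 1

    def discover(suggestions, history, context, habits, local_hour):
        return [s for s in suggestions
                if not_more_of_the_same(s, history)
                and meaningful_connection(s, context)
                and follows_rhythm(s, habits, local_hour)]

    offers = discover(
        suggestions=[{"name": "Luigi's", "category": "restaurant"}],
        history=[],                                 # nothing eaten recently
        context={"next_event": None},               # no imminent meeting
        habits={"restaurant": {"usual_hour": 18}},  # dinner around 18:00
        local_hour=18,
    )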

Applying these principles to user assistance and other information would produce not only context-sensitive content in the static, traditional sense, but highly personalised, dynamically contextualised content.

Mobile Evolution

How much time do we actually spend talking on our mobile “phones” each day? Without our noticing, they have transmuted from phones into Internet terminals. And they’re about to mutate again: from terminals into context-sensing devices. We’ll still phone home and search online for bizarre stories about pop singers, but their real function will be the elaboration of constantly evolving, real-time state vectors.

Your mobile:

Knows
  • where it is (geolocation in 3 axes)
  • indoors or outdoors
  • in motion or still
  • on your body or not
  • the communications channels you use

Detects
  • objects, especially faces
  • input type (verbal, haptic, optic)
  • ambient noise
  • conditions (lighting, electromagnetic, temperature, barometric pressure)
  • proximity elements
  • current time, in all its aspects (local, season, day, date)

Will soon know you
  • age, gender, family situation
  • behaviour (themes that interest you for work and leisure, learning style)
  • networks – social as well as technological
  • history (previous states of your networks, applications, situation in space-time)
  • emotions …

Some terminals already “know” some of these things because we tell them, or because our mobile service providers do. Tomorrow they will really know these variables by themselves, and make decisions based on them. What happens when your phone starts comparing you to statistics from Big Data and factoring your surroundings into your state vector?

Example

You pass a shoe store – Sam’s Shoes, part of a national chain – in a shopping centre. Your terminal knows that you bought a pair of running shoes six months ago and, based on your time spent running, determines that you could use a new pair. Correlating with the store, it finds your brand and model on sale there and alerts you. It won’t do this if you are jogging – instead, it will have the store send an email, and the store may decide to include a voucher.

It’s not going to tell you about Sam’s Shoes’ national sale. It’s getting THIS Sam’s Shoes store to tell you that YOUR CURRENT SHOES ARE ABOUT TO WEAR OUT, and to suggest taking advantage of the sale to get the same ones again.
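As a sketch, the decision logic behind this scenario could look like the following; every name, field and threshold is our own hypothetical illustration:

    # Hypothetical sketch of the shoe-store scenario's decision logic.
    def shoe_alert(purchase, km_run_since_purchase, store_sale_items, state_vector):
        worn_out = km_run_since_purchase > purchase["expected_km"]
        on_sale = (purchase["brand"], purchase["model"]) in store_sale_items
        if not (worn_out and on_sale):
            return None  # nothing worth offering right now
        if state_vector.get("activity") == "jogging":
            # Don't interrupt a run: hand off to the store, which may add a voucher.
            return {"channel": "email", "sender": "store", "voucher": True}
        return {"channel": "push",
                "message": f"Your {purchase['model']} is about to wear out - "
                           f"the same model is on sale here."}

    decision = shoe_alert(
        purchase={"brand": "Acme", "model": "Road Runner 3", "expected_km": 600},
        km_run_since_purchase=720,
        store_sale_items={("Acme", "Road Runner 3")},
        state_vector={"location": "shopping centre", "activity": "walking"},
    )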

This level of personalisation makes marketers salivate – and it will be a reality before we notice.

Levels of Context States

This implies that the context state, as defined by Intel, is not just about location, activity, or surroundings. It’s also about interior states that the terminal has collected by monitoring metabolism and detecting emotion.

Coupling context state information with context history predicts mood, behaviour, needs… At the same time, terminals around us do the same. What if they start exchanging data so as to produce collective actions? Who decides what algorithms are used, and toward what end? How are these algorithms controlled?

We are suggesting here that life-changing actions and decisions might be chosen by machines. The information product offer that we receive on a daily basis will become ever more narrowly refined and focussed as it becomes more and more personalised. What about the element of surprise?

Context States and individual needs

Each client’s needs relate to his or her level of interest or requirements, and a client may be concerned with multiple domains, with varying degrees of interest and need in each.

Handling this involves defining potentially thousands of personas. A coherent content strategy is indispensable for defining the information made available to them.

In user-centered design and marketing, personas are fictional characters created to represent the different user types that might use a site, brand, or product in a similar way… [From Wikipedia[v]]

Although personas are primarily used in marketing, we have a more global vision of them: fictional characters created to represent different user typologies having similar experience sets and needs.

Let’s take an example in the energy industry. Various disciplines (processing, drilling, geology, geophysics, reservoir engineering…) interact to obtain results. People do not have the same levels of experience, competency, skills or know-how in each of these. So their requirements will vary depending on this matrix.
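A minimal sketch of such a matrix, with invented disciplines, levels and values:

    # Sketch: a per-user competency matrix across disciplines (values invented).
    LEVELS = ["onboarding", "apprentice", "proficient", "expert"]

    user_matrix = {
        "processing": "proficient",
        "drilling": "onboarding",
        "geology": "apprentice",
        "geophysics": "apprentice",
    }

    def information_needs(matrix):
        """Rank disciplines: the lower the competency level, the richer the
        assistance that should be offered."""
        return sorted(matrix, key=lambda d: LEVELS.index(matrix[d]))

    # -> ["drilling", "geology", "geophysics", "processing"]:
    # drilling needs the most support, processing the least.
    print(information_needs(user_matrix))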

Evolution of competencies in time for four disciplines

A company using a software platform (provided by a supplier) has objectives concerning the evolution of competencies in various domains. The supplier can’t decide these; they are defined by policy within the company. However, these evolutions can be mapped to persona journeys by the supplier (on a broader scale than the end client). A journey is a set of changes in state vectors – they are shown here in orange. In Information 4.0, we build for these vectors as individual content candidates.

Evolution of competencies – the user journey or experience

Each persona is tagged as belonging to a discipline. In reality, this is not always the case: awareness of, or skills in, other disciplines are required to handle complex processes.

Mapping of user competency acquirement in time across the same four disciplines

If we look at the same situation through an individual user journey, the picture is different. While the company has objectives, the individual user will require or desire other, unplanned competencies, forget some, or want to move to domains outside his core domain. Integrating this phenomenon into our production requires metrics for tracking the real journey. Success will be measured firstly by levels of satisfaction, and secondly by company feedback on the improvement of competencies as a whole.
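A minimal sketch of the kind of metrics this tracking implies, under our own assumptions about the data collected:

    # Sketch: compare the planned journey with the journey actually taken.
    def journey_metrics(planned_stages, actual_stages, satisfaction_scores):
        planned, actual = set(planned_stages), set(actual_stages)
        return {
            "coverage": len(planned & actual) / len(planned),  # planned stages reached
            "drift": len(actual - planned) / len(actual),      # unplanned competencies
            "satisfaction": sum(satisfaction_scores) / len(satisfaction_scores),
        }

    metrics = journey_metrics(
        planned_stages=["drilling:onboarding", "drilling:apprentice"],
        actual_stages=["drilling:onboarding", "geology:apprentice"],
        satisfaction_scores=[4, 5, 3],  # e.g. 1-5 ratings on content candidates
    )
    # -> coverage 0.5, drift 0.5, satisfaction 4.0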

At no point have we talked about typing or structuring, and definitely not delivery.

Facilitating Individual Competency Learning

Users and clients, especially of technological products or software, are learners. Learning is part of the user journey and is designed in stages (changes in state vectors) that can be easily assimilated. As information developers, we won’t decide when the user progresses. They will.

We can’t usefully provide fundamental knowledge, because we don’t know what the user already knows. We can’t cater for everything – we don’t know everything users might need. Users gather information from a variety of sources, and the learning process is no longer linear. Our challenge is to fill in the gaps with information candidates they want or need. Users will learn things we don’t plan or expect. Our job is promoting a journey, facilitating its stages, and rewarding success.

Writing for happiness

Even if we don’t write for happiness, we need to write for success. Emotions will play a big part in how well our content serves its purpose. As an example, joy can be built on; contempt and disgust cannot. Sadness after reading a content track needs a pick-me-up proposal…

Emotion detection in Software UI (& UX response)

A sample of emotions from the Affectiva Developer Portal[vi]

Affectiva[vii] created the Affdex SDK so that others can bring emotion sensing and analytics to software via facial expression recognition. This is AI providing a missing link. They also offer “Emotions as a Service”! We need to figure out how to map content to the detected emotion – so we need to sit down with them, don’t we? Then we’ll have a UX response.
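Until that mapping exists, here is a hedged sketch of what it could look like. The detect_emotion function is a stand-in for whatever a real SDK such as Affdex returns – we are not describing Affectiva’s actual API – and the response table is entirely our assumption:

    # Hypothetical mapping from a detected emotion to a UX/content response.
    RESPONSES = {
        "joy": "build_on_it",    # offer the next, more ambitious content track
        "sadness": "pick_up",    # offer an encouraging, lighter proposal
        "contempt": "back_off",  # stop pushing more of the same content
        "disgust": "back_off",
    }

    def ux_response(emotion: str) -> str:
        return RESPONSES.get(emotion, "continue")

    def detect_emotion(frame) -> str:
        """Stand-in for a real emotion-sensing SDK's facial-expression analysis."""
        raise NotImplementedError

    ux_response("joy")  # -> "build_on_it"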

Evolving from Support to User Relationship

IBM Watson shows how AI can become more competent in conversational interaction. Bots and AI will provide help, allowing the first level of user support to be automated. This won’t take the human out of support: it will cater for repetitive cases and issues and detect the unresolved ones. Humans will intervene more in complex problem solving, expertise, hand-holding, foreseeing issues, improving existing practices, and even animating self-help.

Does this take away the requirement for information production? No, it doesn’t. The objective will be to provide less costly but more pertinent support based on profile, persona and history.  The management systems for this have not yet been designed.

Support, contextualised information, onboarding, learning and all other forms of information have to form a complementary, coherent offering. They all help improve the journey and experience, without overlap, repetition or confusion of purpose.

With Industry 4.0, support will incorporate validated user feedback and stakeholder input. The relationship with the user will be more tightly integrated around a content-issue-learning engine.

Production

Our information production still looks a lot like the Ford Model T of the second industrial revolution. In Industry 4.0, production is still about assembly and maintenance lines, so we can’t rely on it to help us define a production model for Information 4.0. Our next production revolution has to consider:

  • linear models being replaced by constant delivery or real-time availability
  • context tagging as an imperative, emotion tagging in the future
  • collaborative processes becoming standard
  • stakeholders driving goal-oriented efforts

Constant delivery or real-time availability will impact:

  • deciding the what, who and when of production and validation – this is a content strategy
  • managing feedback – this is curation and animation, but not moderation

It’s easy to imagine users and clients contributing in this model of production.

Minimalist considerations reduce information overload, but minimalism doesn’t cater for everything – it’s a principle, not a method. Personas and context sensing require multi-faceted production. Catering for non-linear user journeys means making information independent of its wrappings – making it molecular, so that molecules can be composed on demand. The wrapping is volatile, virtual.

Information maps cannot be static for Information 4.0, or even as structured as they are now. Information 4.0 is lean, nimble, profiled and designed to be assembled spontaneously into an emotion-based persona response that is just a place on a map (in 3D).

What if the journey is suggested by a Content Trip Advisor? The Blippar example at the beginning uses the broadest content set possible, mapped by semantic relationships. This is fine for unguided discovery – discovery without a precise objective.

Our ontologies have a more refined purpose. For products or software, for instance, information candidates and the relations between them are oriented towards onboarding, acceptance, familiarity, proficiency and eventually expertise – towards empowerment and state vector changes.

 

Production example for software

The following is a concept – not to be taken literally, but as an indication of the direction we need to take to produce Information 4.0 without drowning it in existing models.

Information will be:

  • embedded in the software UI
  • oriented towards user assistance (UA), not UI description
  • discovered progressively

It will contribute to a persona state vector change scenario.

Micro molecular topics will have a functional content typology.

Each of these has:

  • a content typology tag
  • a state vector content stage: onboarding, apprentice, proficient, expert, etc.
  • an emotion response
  • <ontology tagging />
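Together, those tags might be carried on each molecule roughly like this – a sketch whose tag vocabularies are invented for illustration:

    from dataclasses import dataclass

    # Sketch of a micro molecular topic; all tag vocabularies here are invented.
    @dataclass
    class InformationMolecule:
        content: str
        typology: str          # functional content typology tag, e.g. "how-to"
        stage: str             # state vector content stage: onboarding ... expert
        emotion_response: str  # the UX response the molecule is designed to support
        ontology_tags: tuple   # <ontology tagging /> - semantic relations

    molecule = InformationMolecule(
        content="To export a report, choose File > Export…",
        typology="how-to",
        stage="apprentice",
        emotion_response="build_on_it",
        ontology_tags=("reporting", "export"),
    )

    def candidates(molecules, persona_stage, ontology_focus):
        """Offer only molecules matching the persona's stage and current focus."""
        return [m for m in molecules
                if m.stage == persona_stage and ontology_focus in m.ontology_tags]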

We will brand information in this manner.

The state vector content stage will be part of the mapping decided in content strategies for the personas we will define.

Does this model apply to physical products? Yes, if we can get Blippar to recognise them, for example.

Governance

Integrating AI, some IoT and Big Data will help marketing become more efficient, tailoring and tuning information in B2C terms. This is where the money will be.

As information developers, we don’t want other requirements to be left on the shelf.

That doesn’t mean ignoring the technology. In some cases, the borders between marketing and after sales information will be crossed, since information is part of the entire life-cycle.

Our job is to give purpose to information, improve the experience on the journey and provide governance as we integrate our stakeholders and end users more and more in the production process.

Governance will have to exist at various levels – notably in adopting unified models for emotion detection and knowledge curation, or adapting existing ones to Information 4.0.

Cooperative production using agile methods is what we will see more and more of – we need to train for it, and those already doing it need to integrate us.

Written by Ray Gallon and Andy McDonald. First published in TCWorld e-magazine, November 2016.

 

[i] Industry 4.0 – https://en.wikipedia.org/wiki/Industry_4.0

[ii] Intel® Context Sensing SDK, retrieved from https://software.intel.com/en-us/context-sensing-sdk/details on 6 August 2016.

[iii] Christian Glahn is director of the Blended Learning Center at the Chur University of Applied Sciences in Switzerland.

[iv] AR Discovery – The Challenges, retrieved from http://www.slideshare.net/phish108/ar-discovery-the-challenges-53902694?next_slideshow=1 on 6 August 2016.

[v] Personas – https://en.wikipedia.org/wiki/Persona_(user_experience)

[vi] Affectiva Developer Portal – http://developer.affectiva.com/metrics/

[vii] Affectiva Developer Portal – http://developer.affectiva.com/index.html

