My C World

A Learning Site

Putting AI to work? …Wait!

(Under Creative Commons License)

Hey AI Product Manager!

So, you built an AI/ML (Artificial Intelligence/Machine Learning) model, demonstrated its performance to your business leaders, and now want to put your model into action and show the business value…

Now starts the daunting task of putting together a coherent roadmap to roll out the AI/ML. The roadmap needs to accommodate business process owners, IT application owners, OCM advisors, GRC (Governance, Risk and Compliance) leads, Finance Controllers, and others you have probably never met before. This is a challenge many data scientists turned AI/ML product managers run into when putting their AI/ML into action.

In this article, I am summarizing my point of view based on my experience building such roadmaps.

Rediscover the Business Process

The first step is to deepen your understanding of the business processes you are targeting with your AI/ML and to rediscover them. Let me explain why.

Business processes typically form the core of any organization and in many cases give the firm a competitive advantage. For example, Amazon’s competitive advantage in logistics is founded on a set of well-evolved and fine-tuned business processes.

In almost all cases, the AI/ML you have developed will cater to such a business process. Hence, it is important to understand the business process and see how AI/ML improves or optimizes it. So, work closely with the respective business process owners to understand the intent of the business process rather than its form.

But, wait, why rediscover it?

Well, optimizing business processes for scale and efficiency has always been key to an organization’s health and performance. Traditionally, such optimization work has centered on human skills (and intelligence), coordination, and tools.

AI/ML (Artificial Intelligence and Machine Learning) offers a radically different way of optimizing business processes because it approaches the human skills and coordination problem differently. A human-centric process design has to account for task coordination and information flow, but an AI-centric process design doesn’t need to.

As a result, when applying AI/ML to business processes, it is important to rediscover and analyze the decision-making complexity and information flows assumed and embedded within the existing business process.

To simplify this analysis, I propose four business process archetypes, in increasing order of decision-making complexity and information flow (a small data-structure sketch follows the list below):

  • Static Archetype – These processes generally have workflows that are static, relatively straightforward, and simple to observe or experience. For instance, onboarding a new employee can fall under this category. These processes are measured for speed (read cost/time) and consistency and are governed for resourcing and performance.
  • Dynamic Archetype – The workflows within these processes are scenario-heavy and use extensive logic and conditions to route the work. They are measured for compliance and risk and governed for exceptions and deviations. An example is a product recall process, which typically involves several teams working to strictly defined guidelines.
  • Intelligent Archetype – These processes have workflows that are loosely defined and judgment-heavy. They are measured for the quality of the output and governed for impact and relevance. An example is the product design process, which is quite creative and experience-heavy.
  • Adaptive Archetype – These processes have workflows that are fluid and use multiple models to achieve a business outcome. They are measured by the business impact they make and are governed for ethics and the change they bring. An example is acquiring a company: you use financial, strategy, and organizational models and adapt your approach frequently based on market conditions and negotiations; these processes are quite fluid in nature.
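
As a rough illustration (the attribute names below are my own, not a standard), the four archetypes could be captured as a small data structure that a roadmap team fills in per process:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProcessArchetype:
    name: str
    workflow_style: str   # how the workflow behaves
    measured_for: str     # what success is measured on
    governed_for: str     # what governance focuses on
    example: str

# Illustrative catalog mirroring the four archetypes described above
ARCHETYPES = [
    ProcessArchetype("Static", "fixed, simple, easy to observe",
                     "speed and consistency", "resourcing and performance",
                     "employee onboarding"),
    ProcessArchetype("Dynamic", "scenario-heavy, rule/condition driven",
                     "compliance and risk", "exceptions and deviations",
                     "product recall"),
    ProcessArchetype("Intelligent", "loosely defined, judgment-heavy",
                     "quality of output", "impact and relevance",
                     "product design"),
    ProcessArchetype("Adaptive", "fluid, multi-model, outcome driven",
                     "business impact", "ethics and change",
                     "company acquisition"),
]

if __name__ == "__main__":
    for a in ARCHETYPES:
        print(f"{a.name:<12} measured for: {a.measured_for:<22} example: {a.example}")
```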

Choose your AI/ML Deployment Modes

Once you have analyzed your business process, the next step is to explore the various ways you can deploy AI/ML into it. Borrowing ideas from my own experience and from the literature, I propose a four-mode framework to plan your AI/ML deployment:


In its most basic form, AI can be deployed as an Intern, where its performance and behavior can be benchmarked, analyzed, and understood. This is the time to stress-test it and fine-tune its sensitivity before real-world action.

Once the AI is ready for action, it is advisable to introduce it as an Assistant, where it can provide support, influence, or even a nudge to a human for faster or more efficient decision-making.

Over time, as confidence and trust in the AI build, you can carve out an ‘area of responsibility’ for it, making it effectively the owner of that process area. This ‘Peer’ mode is where you can realize some hard, tangible value from your AI efforts.

Finally, in some cases, AI can be entrusted with managerial tasks too, such as distributing, coordinating, and evaluating work across the business process.

Please note that it is not always necessary to follow these modes sequentially. Some may be skipped or optional depending on several factors and the process under consideration.
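
To make the four modes concrete, here is a minimal sketch of how a deployment gate might behave in each mode. The mode names follow the framework above (I use ‘Manager’ as shorthand for the final, managerial stage), and the confidence thresholds and routing rules are purely illustrative assumptions:

```python
from enum import Enum

class DeploymentMode(Enum):
    INTERN = 1     # shadow mode: log predictions, never act
    ASSISTANT = 2  # suggest to a human, the human decides
    PEER = 3       # owns a carved-out area of responsibility
    MANAGER = 4    # distributes, coordinates, and evaluates work

def handle_prediction(mode: DeploymentMode, prediction: str, confidence: float,
                      in_owned_area: bool = False) -> str:
    """Decide what happens to one model output under each deployment mode.
    Thresholds and routing here are illustrative, not prescriptive."""
    if mode is DeploymentMode.INTERN:
        return f"LOG ONLY: {prediction} ({confidence:.2f}) - benchmark against humans"
    if mode is DeploymentMode.ASSISTANT:
        return f"SUGGEST TO HUMAN: {prediction} ({confidence:.2f})"
    if mode is DeploymentMode.PEER:
        if in_owned_area and confidence >= 0.90:
            return f"AUTO-EXECUTE: {prediction}"
        return f"ESCALATE TO HUMAN: {prediction} ({confidence:.2f})"
    # MANAGER: the model also routes and tracks work items downstream
    return f"ROUTE AND TRACK: {prediction} assigned to downstream queue"

if __name__ == "__main__":
    for mode in DeploymentMode:
        print(handle_prediction(mode, "approve invoice #123", 0.93, in_owned_area=True))
```

The point is not the thresholds but the shape: each mode changes what the same prediction is allowed to do.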

Building your Roadmap

Business processes may have varying degrees of sensitivity towards automation or change, especially in areas that involve decision-making. AI product leads should work with the respective process owners to evaluate several factors before developing the deployment roadmap.

I will describe four of these factors that I think are important. Each factor has two key sub-factors:

  • Disruption: How much disruption are we introducing?
    • Upskilling or Reskilling – Does the AI drive reskilling or upskilling of people across the process chain?
    • Automation – Does the AI make any of the existing roles redundant?
  • Risk: What incremental risk are we introducing?
    • Risk and Liability – Are there any risks or liabilities arising from the AI performing the step? Have you assessed them?
    • Accountability – Who will you hold accountable for performance gaps or missteps tracing back to the AI? Do you need the AI to explain the rationale behind its decisions?
  • Design: How do we account for co-existence and feedback?
    • Learning – Did you architect a learning loop into your AI? How do you plan to keep the AI relevant to its intended job function on an ongoing basis?
    • Integration – Did you figure out all the possible integration points when deploying AI at a process step? For instance, some steps may need RPA (Robotic Process Automation), digital upgrades, or integration with legacy systems.
  • Change: How do we future-proof the AI/ML against change?
    • Adaptability – How do you plan to ensure the AI can still do the job and adapt even if the workflow mutates?
    • New Skills – How do you update the AI with new skills as they are needed, without hampering existing performance?

The above considerations should provide a qualitative frame of reference for developing your AI/ML roadmap/roll-out plan, and can also serve as a phase-gate checklist.
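
As a sketch of how such a phase-gate checklist might be encoded (the questions are paraphrased from the list above; the pass rule is my own assumption), see below:

```python
# Hypothetical phase-gate checklist built from the four factors above.
CHECKLIST = {
    "Disruption": ["Upskilling/reskilling plan agreed with process owners?",
                   "Impact on existing roles assessed?"],
    "Risk":       ["Risks and liabilities of AI doing the step assessed?",
                   "Accountability and explainability owners named?"],
    "Design":     ["Learning loop architected into the AI?",
                   "Integration points (RPA, legacy systems) identified?"],
    "Change":     ["Adaptability to workflow changes planned?",
                   "Path to add new skills without regressions defined?"],
}

def gate_passes(answers: dict[str, list[bool]]) -> bool:
    """Simple rule (an assumption): every sub-factor must be answered 'yes'."""
    return all(
        len(answers.get(factor, [])) == len(questions) and all(answers[factor])
        for factor, questions in CHECKLIST.items()
    )

if __name__ == "__main__":
    answers = {factor: [True] * len(qs) for factor, qs in CHECKLIST.items()}
    answers["Risk"][1] = False  # accountability not yet assigned
    print("Gate passes:", gate_passes(answers))
```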

I would like to end this article with a note on the current state of OCM (Organizational Change Management). OCM frameworks such as ADKAR make a few fundamental (human-centric) assumptions about the nature of changes that can occur in a business process. With the explosion of AI/ML and the changing nature of what qualifies as intelligence, it is time for us to build OCM 2.0.

What do you think about this article? I would love to learn from your experiences putting AI/ML to work in your domain. Please do share.

Case for your own Data Annotation Platform – Part 1

What is Data Annotation?

Data annotation (sometimes also called data labeling) is the process of defining ground truth in unstructured data such as images. Think of ground truth as the gold standard (as defined by humans for AI/ML to mimic), which defines the objective an AI/ML model needs to meet.

For example, if you want to train an ML model to detect cars, you need to annotate or label many car images, accounting for most possible variations (Sedans, Compacts, SUVs, colors, shapes, age etc.) in the real world, so that a machine can construct a model of how cars look.
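
For intuition, a single annotated car image often boils down to a record like the one below. The field names and values are purely illustrative, not any particular tool’s schema:

```python
# One illustrative ground-truth record for a "car" detector.
# Boxes are in pixel coordinates: [x_min, y_min, x_max, y_max].
annotation = {
    "image": "street_0001.jpg",
    "width": 1920,
    "height": 1080,
    "objects": [
        {"label": "car", "bbox": [412, 530, 880, 910],
         "attributes": {"type": "sedan", "color": "red"}},
        {"label": "car", "bbox": [1100, 500, 1500, 860],
         "attributes": {"type": "suv", "color": "black"}},
    ],
    "annotator": "human_01",  # who defined this ground truth
}

# A model is then trained to reproduce these boxes and labels on unseen images.
```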

In this short article, we will explore data annotation in the context of enterprise-scale AI development and deployment.

What’s the big deal about it?

Building AI/ML models typically needs hundreds or thousands of ground-truth examples to train the model. Of course, the actual training volume needed depends on the task complexity, the ML algorithm, and the variability and collectability of real-world data.

As a result, from a work coordination and execution standpoint, annotating data is generally one of the most complex and time-consuming parts of the whole AI building and improvement process.

In my experience, data annotation typically takes 20-40% of the overall effort in developing AI. Any enterprise that wants to do AI at scale has to think about addressing this crucial step.

How is it typically done?

Classifying or labeling objects of interest within an image takes painstaking diligence. Annotations in many cases are done by data scientists, sometimes assisted by a closed group of individuals. At the other end is a crowd-sourced model such as Mechanical Turk, which might need a closed group of experts to initially seed the process with the right content and examples, so that scalers (people who scale up the work) can pick up the slack.

Once the desired volume of data is annotated, the data scientists who oversee the whole process will use these annotations to train the models. During training, some additional annotations may be needed to tighten up the model’s learning. Finally, once the model is in production, the continuous learning process might need more data annotations to maintain the desired performance. Thus data annotation is not a one-time effort; it stays relevant throughout the AI life cycle.

There are several tools on the market that support the data annotation process described above, whether one-person labeling or true crowd-sourced labeling. Some of the popular options are Amazon Mechanical Turk, Label Box, Microsoft VoTT, MIT LabelMe, Google’s AutoML Labeling, etc.

Choosing your data annotation Platform

For sustainable and efficient enterprise AI development, it is important to first move away from an annotation tool mindset to an annotation platform mindset.

A data annotation platform, in addition to the annotation functionality, includes workflow, marketplace, guardrails/governance and controls for an enterprise use. For example, the platform should be able to support multiple data scientists, annotators, and projects simultaneously.

The following are some of the most important aspects to consider when choosing a data annotation platform.

  • Process: A structured, repeatable, and reliable data annotation workflow
  • Scale: Ability to have multiple users annotate in a consistent, collaborative, and coordinated manner; crowd-sourceability
  • Data: Ability to annotate various types of data such as imagery (RGB, LiDAR, infrared, 3D MRIs, etc.), video (360, HD/8K, aerial, ground), audio (stereo, mono, surround, etc.), and text (language, orientation, etc.)
  • Annotations: Ability to support various annotation techniques such as bounding boxes, polygons, masks, or pixel-based annotations (semantic segmentation) for imagery
  • Integrations: Annotate once and publish in many different formats for data scientists to use (e.g., TF Record, COCO, Pascal VOC, etc.); a small export sketch follows this list
  • Security: Ability to secure your data and annotations, e.g., limit and restrict who can see your images and annotations; control over your data and annotations; grant/revoke access
  • Manage and Support: Ability to manage annotation data either as a batch or as a trickle feed – typically as feedback to improve models; implement new features as needed (modular architecture)
  • Isolate: Ability to isolate projects by scope, purpose, or other considerations, e.g., marking projects as public or private
  • Asset-ize: Architectural flexibility to license your data and annotations to vendors, partners, or researchers
  • Incentive: Ability to attract and reward annotators, as annotation is a very monotonous job
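
On the ‘Integrations’ point, here is a minimal sketch of what an “annotate once, publish in many formats” export layer could look like, assuming the hypothetical internal record format shown earlier; only simplified Pascal VOC-style XML and COCO-style output are shown:

```python
import xml.etree.ElementTree as ET

def to_voc_xml(record: dict) -> str:
    """Render an internal annotation record as simplified Pascal VOC-style XML."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = record["image"]
    for obj in record["objects"]:
        node = ET.SubElement(root, "object")
        ET.SubElement(node, "name").text = obj["label"]
        box = ET.SubElement(node, "bndbox")
        for key, value in zip(("xmin", "ymin", "xmax", "ymax"), obj["bbox"]):
            ET.SubElement(box, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

def to_coco_dict(record: dict, image_id: int = 1) -> dict:
    """Render the same record as a simplified COCO-style structure."""
    categories = sorted({obj["label"] for obj in record["objects"]})
    cat_ids = {name: i + 1 for i, name in enumerate(categories)}
    annotations = []
    for i, obj in enumerate(record["objects"], start=1):
        x1, y1, x2, y2 = obj["bbox"]
        annotations.append({"id": i, "image_id": image_id,
                            "category_id": cat_ids[obj["label"]],
                            "bbox": [x1, y1, x2 - x1, y2 - y1]})  # COCO uses x, y, w, h
    return {"images": [{"id": image_id, "file_name": record["image"]}],
            "annotations": annotations,
            "categories": [{"id": v, "name": k} for k, v in cat_ids.items()]}

if __name__ == "__main__":
    record = {"image": "street_0001.jpg",
              "objects": [{"label": "car", "bbox": [412, 530, 880, 910]}]}
    print(to_voc_xml(record))
    print(to_coco_dict(record))
```

The same internal record can then be written out once per consumer format, so annotators never have to re-label for a different toolchain.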

Why a custom one makes sense

Given the above requirements, most options available in the market, whether commercial off-the-shelf or open source, do not make the shortlist. While they are strong in some aspects, they completely miss the mark on others.

Therefore, it makes more sense to build an annotation platform for your own purpose. The high barrier to custom development can be circumvented by customizing or extending an open-source implementation such as Microsoft VoTT.

Stay tuned for the next article to learn more about how to go about building your own enterprise-scale data annotation platform.

App Innovation through ‘Gene’ Mutations

I wonder at how far the world as we know it has come since the Cambrian explosion half a billion years ago. Consider this: on average, it takes about 20 years for a human being to impart a set of meaningful changes (a.k.a. mutations) to the collective human gene pool (given that people start having kids after their teens!). No wonder it took us so long (~100K years) to evolve from apes.

The more I think about it, the more it correlates with the digital revolution we started in the late 1900s, when we first began to create programs. The code we write (e.g., your Uber mobile app) is akin to DNA, except that this digital DNA is mutated through controlled human supervision (your Uber app becomes smarter over time – not by itself but through the product team sitting at Uber!).

What if it were let run wild, with no adult supervision?

I was curious and started thinking about how that could even be possible. I assume we change our DNA through habits. The more we use our body for a specific purpose, the more it develops and customizes itself for that task (just like we lost tails or developed larger brains!).

Imagine if your app could understand its features, look at the world, and struggle for its survival (…read reinforcement learning…). Now give it the ‘eyes’ and ‘senses’ to see the world and let it improve and change its ‘DNA’. See what happens!

My hypothesis is that the app can mutate and evolve by itself over time to suit its changing environment (obviously, we define the rules of the game on an ongoing basis – what it can and cannot develop). Only those features (mutations) that fit the environment survive.

Now some fiction: I believe consciousness is the result of complexity. Anything sufficiently complex can become self-aware over time. If our app is self-aware and constantly evolves for its survival, mutating itself once every day instead of once every 20 years, it would take only about 13 years to reach a relative complexity comparable to that of humans versus apes (~100K years at 20-year generations is roughly 5,000 generations; 5,000 daily mutations take about 13.7 years).

This gets me to a larger, troubling question: are we already just apps on a super being’s smartphone? Are we living in an artificial creation or simulation?

Jokes apart…

Imagine your business service offering improving itself over time, with a little insight from your product innovation leads. Imagine your strategy being reshaped constantly by arbitrating supply against demand or fostering new demand.

Your app code is the DNA, mobile and other digital devices are the organisms, we are Nature (we, as consumers, collectively determine a feature’s survival chances), and your software becomes the intelligence that can meaningfully mutate.

How can we do it?

Let’s quickly look at how traditional application innovation is done. A closed group of people, possibly borrowing ideas from outside or from their own research, comes up with an improvement idea. This is followed by an experiment (like A/B testing), a proof of value, and maybe a pilot, followed by a full-blown launch. Some of these phases may be switched or skipped, of course.

Now, think about this: how about throwing it open to the users? Imagine you develop only the ‘seed’ app and open it up for people to install and use. You help your ‘seed’ software mutate quite often, producing a new feature or altering an existing one. The more people use and like a feature, the more prevalent it becomes in the feature pool, thereby defining the next stage of the application’s evolution.

Now, how can we let our software randomly create sensible mutations? First and foremost, teach your software what a sensible mutation means. AI needs to be embedded in your software so that it is aware of what a meaningful change is. For instance, if a user exercises a particular feature often, make your app intelligent enough to readjust its UI for that user, or fast-track enhancements to the feature if many people are using it.
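
As a toy sketch of that idea (the feature names and the reordering rule are made up), an app could track feature usage and ‘mutate’ its menu layout so that heavily used features surface first:

```python
from collections import Counter

class SelfAdjustingMenu:
    """Toy 'mutation': reorder menu items by observed usage frequency."""

    def __init__(self, features: list[str]):
        self.features = list(features)
        self.usage = Counter({f: 0 for f in features})

    def record_use(self, feature: str) -> None:
        self.usage[feature] += 1

    def mutated_layout(self) -> list[str]:
        # Most-used features bubble to the top of the UI.
        return [feature for feature, _ in self.usage.most_common()]

if __name__ == "__main__":
    menu = SelfAdjustingMenu(["payments", "transfers", "check deposit", "profile"])
    for _ in range(12):
        menu.record_use("check deposit")
    for _ in range(5):
        menu.record_use("payments")
    print(menu.mutated_layout())  # ['check deposit', 'payments', 'transfers', 'profile']
```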

To do this, algorithms need to be able to recognize value-adding features across apps. For example, a training dataset can be built from the various popular apps on the Apple and Android stores, considering user reviews and each app’s feature upgrade history. Features differ by app category, but the essence is understanding what users find valuable in an app.

A banking app may need features such as payments, transfers, profile updates, check deposit, etc. A possible next step could be to abstract these into generic lifestyle needs, in this case transactions and identity.

So let your apps go wild and have fun!

I wish I could make this content self-aware and improve itself based on viewers’ feedback!

Cheers!

Kumar Mankala

Missing the taste of Globalization

Post-crisis, many people at “a natural advantage (read rich nations)” across the world are realizing that globalization is impacting them negatively. Look at Brexit, or even Trump.

Though people embraced globalization rapidly (during economic peaks), they were not really ready for it – competing with something they knew little about (think of a US factory worker competing with someone in China they know nothing about).

People are not to be blamed for this withdrawal from globalization: in short, the world was technologically ready for globalization, but people were not ready mindset-wise. People’s sensitivity to globalization-driven economic changes was very high, and not only across nations but also within countries.

However, economies have now had a taste of what globalization is like. Even if borders were enforced as they were before G (Globalization) happened, economics will continue to demand that the rules of the game be the same as, or better than, those of the G world.

So the world is now torn apart by two forces: the Political pulling towards pre-G/no-G, the Economic pulling towards G. It will be interesting to see who wins. Both forces are irrational, but if history is any testimony, the Economy’s memory is short-lived and the Political’s is medium-lived.

Will the Economy come to terms with the “no-G” political order? It looks like it… but now comes the most interesting aspect: how will countries counter the slow progression of AI/automation from within? Can you foster innovation but resist automation? The two go hand in hand. Even if countries go no-G, the economy, having tasted low costs and high productivity, will keep investing in automation to offset the loss of the open field. This may take some time, but the shots have already been fired.

It can be argued that automation will have limited impact on the larger economy. But the monster is still not out of the box! How will technology enable people to trade freely?

How tangible and globalization-immune are your trade-able skills?

 

 


Towards a Sentient System

Let me start with a question: do you know how your home value is affected by Trump winning the GOP nomination? Is it going to go up or down, and what will be the time lag? What other factors could affect it in the meantime?

It’s an interconnected world, no wonder! Almost everything is linked to everything else, and actions have consequences elsewhere. Luckily, the discipline of systems thinking comes in handy for drawing causal maps based on what we think are causes and effects. Treat these causal maps as hypotheses.

Traditionally, it was quite difficult to test these hypotheses for a variety of reasons (lack of data, handling tools, real-time analytical computation, etc.). Thankfully, we now have big data and analytics technologies, and we can test these causal maps for the reasonableness of the conclusions they produce. Multiple causal maps (often from participants with conflicting interests) result in one interconnected system. A game-theory-based simulation could then produce outcomes that mimic real-world happenings, and a quick regression could assign weights to the various links of the map. These weights are obviously re-evaluated whenever a prediction deviates “significantly” from the actual outcome.
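
As a minimal sketch of one way to encode and test such a causal map (the variables, links, and data below are invented for illustration), the map can be held as a weighted directed graph, the link weights fitted with a simple regression, and re-fitted whenever predictions drift:

```python
import numpy as np

# Hypothetical causal map: interest_rate -> mortgage_demand -> home_value,
# encoded as a list of parents per node.
CAUSAL_MAP = {
    "mortgage_demand": ["interest_rate"],
    "home_value": ["mortgage_demand", "local_employment"],
}

def fit_edge_weights(data: dict, causal_map: dict) -> dict:
    """Assign a weight to each causal link via ordinary least squares."""
    weights = {}
    for child, parents in causal_map.items():
        X = np.column_stack([data[p] for p in parents])
        y = data[child]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        weights[child] = dict(zip(parents, coef))
    return weights

def predict(node: str, inputs: dict, weights: dict) -> float:
    """Propagate a signal through the map to predict one node's value."""
    return sum(weights[node][parent] * inputs[parent] for parent in weights[node])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 200
    interest_rate = rng.normal(4.0, 1.0, n)
    mortgage_demand = -2.0 * interest_rate + rng.normal(0, 0.5, n)
    local_employment = rng.normal(100.0, 5.0, n)
    home_value = 3.0 * mortgage_demand + 1.5 * local_employment + rng.normal(0, 2.0, n)
    data = {"interest_rate": interest_rate, "mortgage_demand": mortgage_demand,
            "local_employment": local_employment, "home_value": home_value}

    weights = fit_edge_weights(data, CAUSAL_MAP)
    print(weights)  # weights would be re-estimated whenever predictions drift from actuals
    print(predict("home_value", {"mortgage_demand": -8.0, "local_employment": 102.0}, weights))
```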

The next step after testing the causal maps and their interactions is to set them free, let them react to real-world situations, and see how closely they adapt to the world. Add newly discovered branches, or pieces of another map, to address the deviations.

I’ll leave it to your imagination where you could use such a map. Once you find a use and build a live learning model, just sit back and watch this cross-disciplinary asset (viz., graph theory, systems thinking, analytics/machine learning, and game theory) bring value to you.

Please share your thoughts!

Some more…

Use concepts from systems thinking to define causal relationships between various entities (causal maps). Then apply traditional big data analytics (e.g., regression) to assign weights to the actions (“pseudo”-independent variables) resulting from the variety of signals (“pseudo”-dependent variables) propagating through the causal graph.

Any signal (be it the US Treasury raising interest rates or me writing this post now) will have its own effect, ranging from strong to “butterfly”-like. The essence is to assign possibilities and probabilities to the various outcomes (read: actions by participants, correlating with the “pseudo”-independent variables) arising from this signal propagation.

You can learn some basics of systems thinking from this MIT courseware here.

Welcome to My “C” World

Welcome to my C World!

After years of procrastinating, I made my wandering mind sit in a chair and host a website.

Of course, as with anyone else out there, you start with one domain name in mind and finally end up with something else… I started with iWorld.net (Intelligent World) but ended up with a C world. C denotes a conscious world here, indirectly implying intelligence.

Now, coming to the intent of this site: a virtual home for my mind. My mind has been a hunter-gatherer for quite a while, hunting and gathering ideas and absorbing knowledge from this amazing world. I guess now it is time to settle down and start organizing my “precious” gatherings.

Going forward, I would like to build a “little village” where we can share and reflect back on our learnings.

Thanks for your attention.

-K

 

