When I was very young, I remember my grandmother interceding in a heated financial argument between my parents by loudly lamenting “get rid of that damn calculator, it’s ruining everything!” That’s the earliest example I can remember when a tool, rightly or wrongly, jammed up a process. I’m pretty sure that the calculator didn’t give incorrect results, but that it was my parents who were applying it incorrectly to the financial problem.
My, how tech tools have changed. Yet how they’ve retained the blame!
Satisfaction vs. dissatisfaction often comes down to setting expectations
I’m struck (and sometimes scared) by how often results and recommendations from big data and machine learning applications are not used merely as “signals” by the people addressing a problem. They are not treated as the data points or opinions that they are; instead, the answers supplied are sometimes inappropriately taken as “the” answer, in black and white. We may get to machine learning and AI nirvana eventually, but we’re not quite there yet.
In the examples shared below, if expectations and education about product capabilities were set properly, customers could be better satisfied, even with today’s imperfect solutions. Product marketers need to educate markets and press (both general and vertical), salespeople, and individual users as a part of the customer life cycle. It’s up to us to set the context for the proper use of the products as they evolve and help set expectations for what users should and shouldn’t rely on when decision “signals” are provided.
Marketing leaders should provide enablement and ethics around machine learning
How can we as marketers teach people about what to expect and how to apply the results of today’s machine learning solutions that will guide people in decision making? We should embed how we talk about and train people on these solutions into an educational marketing approach. We need to have this approach for two big reasons:
- For selfish reasons — We’ll have customers whose expectations are set well and are happy and grow with us.
- For ethical reasons — We’ll have fewer people quickly deferring 100% of their judgment to software, because they’ll better understand what the software really is and isn’t doing for their process.
Here are four things that we as marketing and product leaders should be sure to emphasize at the right level of abstraction across personas throughout the customer life cycle.
1. Understand the current state of the art of what the systems can accomplish
Just this week, CNN and other news agencies reported on an article published in Nature that indicated initial results of artificial intelligence systems being able to detect breast cancer using mammography screenings better than human radiologists can. The promising and exciting results in this solution (and others similar to it) are due to the scope of the problem and the size and accuracy of the data sets used to build the model.
It’s important to note, as explicitly stated in the abstract of the Nature article:
“We ran a simulation in which the AI system participated in the double-reading process that is used in the UK, and found that the AI system maintained non-inferior performance and reduced the workload of the second reader by 88%. This robust assessment of the AI system paves the way for clinical trials to improve the accuracy and efficiency of breast cancer screening.”
I read this as an optimistic indication that visual machine learning systems in this space can (and will?) be an important part of how we improve accuracy of breast cancer detection through mammogram readings. But I don’t read this, at least yet, as machines being completely capable of taking over this task. These systems will be tools for doctors to improve the workflow, accuracy, and speed of detection via mammograms.
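The workload-reduction arithmetic from the abstract can be illustrated with a toy simulation. This is my own sketch, not the study’s methodology: I assume (hypothetically) that the human second reader is consulted only when the AI disagrees with the first reader, and the disagreement rate is an invented number chosen to roughly match the reported 88% figure.

```python
import random

random.seed(0)

def simulate_double_reading(n_cases=10_000, disagreement_rate=0.12):
    """Toy model of a double-reading workflow with an AI second reader.

    Hypothetical assumption: the human second reader is only asked to
    review cases where the AI's read disagrees with the first reader's.
    """
    second_reads_needed = sum(
        1 for _ in range(n_cases) if random.random() < disagreement_rate
    )
    # Fraction of cases the human second reader never has to touch.
    return 1 - second_reads_needed / n_cases

print(f"Second-reader workload reduced by {simulate_double_reading():.0%}")
```

The point of the sketch is the workflow shape, not the numbers: the AI is a component inside a human process, not a replacement for it.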
In this example, and the ones below, it is important for us as marketers to help people understand what the system can do, in what context, and with what accuracy, beyond the exciting “headlines.”
2. Become an essential part of customer education in the product life cycle
I was most recently at an Ed Tech company that uses big data, machine learning, and interesting user interfaces to provide guidance to teachers and students in a number of areas. It is most famous for “plagiarism detection” though its solutions go well beyond that.
Without breaching any NDAs, I can tell you that as recently as this past Christmas break, I had student relatives and friends tell me that their instructors misapply the “similarity scores” shown by the software without enough “human context” — to the massive frustration of my student relatives. My former employer goes to great lengths to educate institutional buyers in education about what a “similarity score” can mean and how to properly use it in the context of various instructional goals at a variety of grade levels.
And yet… the number of instructors who treat the “similarity score” as a rule rather than a signal, out of context and without the recommended further exploration of plagiarism cues and details, is still too large.
As product marketers, we must be able to better explain what big data and machine learning can and can’t do and what people need to do to best utilize our offerings.
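The “signal, not verdict” distinction can be made concrete in code. This is purely my own illustration — the class, field names, and threshold are invented, not taken from any real product:

```python
from dataclasses import dataclass

@dataclass
class Submission:
    student: str
    similarity_score: float  # 0.0-1.0: fraction of text matching other sources

def triage(sub: Submission, review_threshold: float = 0.30) -> str:
    """Route a similarity score as a *signal* for human review.

    A high score never produces a plagiarism verdict on its own:
    quotations, bibliographies, and reused assignment prompts all
    inflate the number, so the only automated outcome is a request
    for an instructor to look at the matched passages in context.
    """
    if sub.similarity_score >= review_threshold:
        return "flag for instructor review of matched passages"
    return "no action suggested"

print(triage(Submission("pat", 0.45)))
```

Note that neither branch says “plagiarized”: the system’s vocabulary itself keeps the human in the loop.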
3. Ensure a UX (user experience) includes explanations and real-time feedback
Heuristic-based detection of malware and viruses has improved dramatically over the past few years. Analysis of writing (both in real-time and after submission) continues to expand, with examples as simple as suggestions of what you may want to write next, all the way to providing formative suggestions to improve a long piece of writing. Identification of faces in photographs, data built up around what you may want to buy or watch next, suggestions about who you may want to “friend” on a social network or what job you should apply to — this is all happening now. And we as marketing leaders have some responsibility to educate and inform in-product and in-market.
The human interfaces to these approaches need to be thoughtful, providing feedback that is immediate, actionable, transparent, and, ideally, unobtrusive. For example, suppose I have decided to encrypt all of the spreadsheets in a folder on my computer, and my malware detection software concludes that nefarious ransomware may have started the process. But in this case it wasn’t ransomware — I actually wanted to encrypt those files! The user interface had better let me do what I want to do, via some kind of non-obtrusive interaction, rather than “automatically decide” that I can’t. (And it should also clearly explain the situation if I am indeed under a malware attack!)
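A minimal sketch of the interaction I’m describing, with an invented handler and callback rather than any real security product’s API:

```python
def on_suspected_ransomware(folder: str, ask_user) -> str:
    """Hypothetical handler: pause, explain, and let the user decide.

    `ask_user` is a callback standing in for a non-obtrusive UI prompt;
    it returns True if the user confirms they started the encryption.
    """
    prompt = (
        f"Rapid encryption detected in {folder!r}. "
        "Did you start this yourself?"
    )
    if ask_user(prompt):
        return "allow"       # user-initiated: let the work continue
    return "quarantine"      # unexplained: block, then explain the attack

# Simulated user who did intend to encrypt their spreadsheets:
print(on_suspected_ransomware("~/spreadsheets", lambda msg: True))
```

The design choice worth noting: the heuristic produces a question, not a unilateral action, and both outcomes come with an explanation the user can act on.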
In another example, suppose a piece of writing I am working on within an educational solution has an interface that gives me some well-meaning advice, based on machine learning and a huge database of instructor feedback on similar writing assignments — but that advice was flat out wrong in my case. Or what if my writing was in fact “wrong” because I was using a creative outlier in my writing approach (unique formatting, poetic approach, broken-on-purpose grammar rule, etc.) that made it unusual, but awesome? When I ignore the advice generated, will I get “dinged” for it on my grade? Do I have an opportunity to reach out to my teacher? Do I have an opportunity to “train” the software on why my approach was not, in fact, wrong?
I need to know as a user that I can have an impact on the system if it is doing something it shouldn’t be doing.
4. Persona modeling that informs data sets that reflect humanity
In machine learning, it’s all about the depth and validity of the data sets. There has been a lot of writing recently on embedding biases into machine learning systems. We don’t have to go too far to imagine image processing algorithms that don’t correctly account for skin tone; voice analysis tools in public-speaking improvement and grading systems that don’t account for a brilliant person who stutters, or a person with a vocal slur from an injury; or video analysis tools that ding a genius presenter on her style because she has facial paralysis.
How would one of the best TED talks have been judged by a system rating body language and stage movement that was not trained for the excellence the late Stella Young brought to all of us?
As product marketers, managers, and leaders, we have to ensure that the approaches we use don’t just let people who are “outliers” (and I use that term begrudgingly) opt out and use other workflows. (Opting out, and “call us if you need another way,” is in and of itself exclusionary.) Rather, it’s up to us to advocate for tools designed on data sets that reflect as much of humanity as we can cover, and for decision recommendations and signals delivered with as little bias as possible.
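One concrete, if simplified, way to act on this is to audit a training set’s coverage before modeling rather than after shipping. The group labels and the 5% floor below are illustrative assumptions of mine, not a standard:

```python
from collections import Counter

def coverage_gaps(samples, min_share=0.05):
    """Report groups under-represented in a training set.

    `samples` is a list of group labels (e.g. self-reported speech or
    skin-tone categories); any group falling below `min_share` of the
    data is flagged so collection can be fixed before training.
    """
    counts = Counter(samples)
    total = sum(counts.values())
    return sorted(g for g, c in counts.items() if c / total < min_share)

data = ["group_a"] * 900 + ["group_b"] * 80 + ["group_c"] * 20
print(coverage_gaps(data))  # group_c falls below the 5% floor
```

A check like this doesn’t remove bias by itself, but it turns “reflect as much of humanity as we can” from a slogan into a gate a release has to pass.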
It’s not that we shouldn’t use machine learning. It’s how we do it that counts.
It’s not that we shouldn’t use machine learning and big data to help solve problems better and faster. I’m a firm believer that on the whole they will help humanity greatly. But without addressing the marketing and communication challenges listed here, and especially the challenging ethical persona modeling and data collection approaches implied by number 4, we may miss out on the next big thing or next amazing person.
Let’s not discourage our next student from being a world class author. Let’s not miss the next “genius hire” by misusing the signals from an automated text or video hiring tool. (I’m bothered when I see an awesome resume from a creative applicant smashed and boiled down into simple text so it can be parsed by an old-school applicant tracking system).
It’s our responsibility as marketing and product leaders to ensure that these tools reflect the level of humanity that we all deserve. It’s up to us as product marketers and managers to help our teams develop and deliver big data and machine learning solutions, and data sets, that address all of humanity, with user experiences that educate people and give the system opportunities to grow, learn, and adapt to new situations.
Thanks for reading
(Yes, as of 04-Jan-2020 I am looking for a greater Boston and/or remote position. Please check me out here and on LinkedIn at https://www.linkedin.com/in/garymdietz/ )