Wednesday, January 18, 2017

Machines for Navigating a Nuanced World

Whether or not you agree with the outcomes of the Brexit vote or the U.S. elections, in both cases, traditional models for predicting voter behavior clearly broke down. The factors that contributed to these electoral surprises indicate a profound shift in how people make decisions, with broad implications for enterprises.

Whether we aim to influence consumers to purchase a product or to guide employees to execute a new business strategy, we need a more nuanced understanding of human behavior and opinion. Giving these their due requires us to consider more data and conduct deeper analyses.

To start, let’s consider what appears to have confounded some pollsters:

  • Measurement errors: Analysts struggled to predict which voters would be most likely to go to the polls.
  • Undetected decision criteria: Voters surprised pollsters by choosing sides based on their wish to be in the winning faction rather than their perceptions of the costs and benefits of each position.
  • Culture jamming: Techniques for disrupting mainstream media with fake news, along with rising distrust in government and other institutions, challenged pollsters’ ability to pinpoint voter preferences.
  • Micro-targeting: The ability for campaigns to target each person with messages specially tuned to that person’s preferences and behavior made it difficult to ascertain the extent to which information broadcast to the general public influenced voters’ choices.

Each of these phenomena also affects how successful we are at modeling people’s behavior toward our businesses and in society at large. We need machine learning to help us derive a more individualized, behavioral view of the customers, employees, and partners that we are trying to influence.

We are just beginning to comprehend how to use machine learning, as well as to ascertain the practical and ethical limits to doing so. The quality of the algorithms we use to predict and respond to human behavior needs to improve. However, the initial advances indicate that machine learning, to a large degree, is the key to harnessing our understanding about how to influence people and their decisions in consumer, business, or social contexts.

That’s why we’ve dedicated the Q1 2017 issue of Digitalist Magazine, Executive Quarterly to questions at the forefront of machine learning in the enterprise. In “Empathy Machine,” our cover story, we explore the progress toward artificial intelligence that can read and respond to human emotions—the killer app for the digital economy. Our feature “An AI Shares My Office” cuts through some of the noise about work automation to uncover the ways that humans and machines will coexist in the workforce for the foreseeable future. We also look at the critical issue of how machine learning can help humans address the issue of bias—both conscious and unconscious—in their decision making. The feature “The End of Bias?” and our Thinkers interview with data scientist Cathy O’Neil probe the potential for substantially reducing the amount of unfair and unproductive bias in the world. We close with ideas from SAP chief innovation officer Juergen Mueller about how to apply machine learning across the enterprise.

Machine learning is ready to be treated seriously as a source of business opportunity as well as a management platform. We offer this issue of Digitalist Magazine to contribute to and advance that discussion.

Articles from the Q1 2017 issue of the Digitalist Magazine, Executive Quarterly include:

PROFIT (Feature Stories)

  • Empathy Machine: By Markus Noga, Chandran Saravana, and Stephanie Overby – In the race to build the most effective digital organization, the ability to understand and respond to human emotion may ultimately be the key differentiator. A subset of artificial intelligence called “affective computing” is poised to make a big impact; it reads and responds to our emotions, making it the killer app of the digital economy. This cover story examines how companies can take advantage of this emerging technology – and the limitations and ethical issues they will need to navigate in the process.
  • An AI Shares My Office: By Erik Marcade, Dinesh Sharma, Sam Yen, Markus Noga, and Chandran Saravana – For all the fears of AI taking our jobs, the reality will be less dramatic. In fact, AI-human collaboration, rather than outright replacement, is the future of work. AI-based tools will very likely be colleagues who literally share your office. As part of this, big technological and cultural shifts inside companies are coming: the corporate structure as we know it could disappear. Creativity and problem solving will become the highest-valued human abilities. Asking the right questions will become much more important, and the next crop of executives will be given the tools to work with AI early on – starting in business school.
  • The End of Bias?: By Yvonne Baur, Brenda Reid, and Steve Hunt – AI has the potential to help us avoid harmful human bias – both intentional and unconscious – in hiring, operations, customer service, and the broader business and social communities. Doing so makes good business sense. Today, AI excels at making biased data obvious, but that isn’t the same as eliminating it. It’s up to human beings to pay attention to the existence of bias and enlist AI to help avoid it. That goes beyond simply implementing AI to insisting that it meet benchmarks for positive impact.


  • How Machine Learning Enables the Intelligent Enterprise: By Juergen Mueller, Chief Innovation Officer, SAP – Public attention to machine learning often focuses on consumer applications such as recommendation engines and smart devices, but it also holds great promise for B2B uses. A look at where machine learning shows promise inside an intelligent enterprise, and what business leaders must do to embrace the opportunity. “In time,” Mueller says, “machine learning will be like electricity to us – we’ll find it hard to imagine the world without it.”


  • Digital Devices to Enhance Nanotechnology – Devices for the digital economy that offer more powerful and efficient power sources; the ability to detect diseases quickly, efficiently, and inexpensively; and treatment for infected water supplies and diseases.

  • Unmasking Unconscious Bias in Algorithms – Author Cathy O’Neil, a former math professor turned hedge fund data scientist, also known as “Mathbabe,” discusses how to create accountability for the mathematical models that businesses use to make critical decisions.

Jeff Woods is the VP of Marketing Strategy and Thought Leadership Marketing at SAP. 

via SAP News Center
