Wharton Professor Kartik Hosanagar is the author of A Human’s Guide to Machine Intelligence
Wharton Business School Professor Kartik Hosanagar’s new book, ‘A Human’s Guide to Machine Intelligence,’ examines how algorithms and artificial intelligence are starting to run every aspect of our lives.
A tech entrepreneur, Hosanagar co-founded and developed the core IP for the all-in-one digital marketing platform Yodle Inc., which was acquired by Web.com. He is also a co-founder of SmartyPal Inc., which builds personalized game-based learning technology.
Hosanagar graduated at the top of his class at BITS Pilani and holds a PhD in Management Science and Information Systems from Carnegie Mellon University.
“Indian education has an excellent emphasis on technical rigor. That gives you a strong foundation for careers that demand it, such as engineering, programming, and data science,” said Hosanagar.
“The US education system emphasizes creativity and independent thinking. It teaches you well how to think on your feet and how to do something new that hasn’t been done. Both approaches have merits and the ideal world would blend the two in good proportions,” he added.
One of the world’s top digital business professors, Hosanagar sat down for a freewheeling interview with BrainGain Magazine. Excerpts:
Is there a premium in going to a US college to study topics like data structures and algorithms, key languages, and program design?
As of now, there is huge demand for, and not enough supply of, people trained in data science. Data science includes statistics as well as knowledge of programming and artificial intelligence (AI). For the next several years, demand for data science talent will far exceed the supply. So, there will be a big premium for people with those skills.
As the John C. Hower Professor of Technology and Digital Business and a professor of marketing at the Wharton School, can you give us an overview of the highly sought-after course you teach?
The course I teach, called Enabling Technologies, provides a broad overview of what’s going on in the tech industry. Conducting business in a networked economy invariably involves interplay with technology. The purpose of the course is to improve understanding of technology (what it can or cannot enable), the business drivers of technology-related decisions in firms, and to stimulate thought on new applications. The course has traditionally been taken by students who are interested in a career in tech. Increasingly, students with many other interests have started taking the course given how central tech is to so many industries and job functions.
You are an author, entrepreneur, and authority on the digital economy, particularly the impact of analytics and algorithms on consumers and society. You’ve also been recognized as one of the world’s top 40 business professors under 40. Can you share your secret to time management?
Ultimately, there is no secret to it beyond hard work and prioritization. Hard work is about focusing and getting the most out of the limited hours available. Prioritization is about identifying the activities that will have the most impact and focusing on them over less impactful activities.
What inspired you to write A Human’s Guide to Machine Intelligence?
I spend my days helping students understand technology; designing and analyzing studies that probe algorithms' impact on the world; and writing code myself. And while my subject gets a lot of attention in popular journalism, I feel the public lacks the right mental models to understand algorithms and AI, and as a result the conversation is too fear-oriented, at the expense of being solution-oriented. This is my attempt to address these problems and start a conversation on what the solution should look like.
What are algorithms, how do they work, and why do they occasionally go rogue?
Algorithms are simply a series of steps a software application follows to get a task done. Every software application you use has an algorithm that determines what steps it takes when the user does something. You might follow a series of steps to make an omelette. You would usually call it an omelette recipe. But you could just as well call it an omelette algorithm.
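The recipe analogy translates directly into code. Below is a minimal, illustrative sketch (the function and step names are mine, not from the book) showing the omelette “algorithm” as an ordered series of steps a program follows:

```python
# A recipe is an algorithm: an ordered series of steps that turns
# inputs (ingredients) into an output (an omelette). This toy function
# is purely illustrative -- the step names are assumptions for the example.

def omelette_algorithm(eggs: int) -> list:
    """Return the ordered steps for making an omelette from `eggs` eggs."""
    return [
        f"Crack {eggs} eggs into a bowl",
        "Whisk with a pinch of salt",
        "Heat butter in a pan",
        "Pour in the eggs and cook until just set",
        "Fold and serve",
    ]

for step in omelette_algorithm(3):
    print(step)
```

The point is simply that the computer, like the cook, executes the steps in a fixed order; change the steps and you change the dish.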
We tend to think of algorithms as objective decision-makers, but they are in fact prone to many of the same biases we associate with humans. A recent example is the use of algorithms in US courtrooms to compute risk scores, such as a defendant’s risk of reoffending. These scores are then used by judges, parole officers, and probation officers to make criminal sentencing, bail, and parole decisions. Recent research shows that these algorithms were biased against black defendants. Other examples include sexist resume-screening algorithms used by recruiters, social media newsfeed algorithms that promoted fake news stories around elections, and many more.
An important moment of insight for me as I was writing this book came from realizing that we can better understand the causes of rogue algorithmic actions by looking at what drives problematic human behavior. In psychology and genetics, human behavior is often attributed to the genes we are born with and our environmental influences: the classic nature-versus-nurture argument. It turns out that algorithms are no different. The actions and behaviors of early computer algorithms were fully programmed by their human creators. This was their nature.
However, modern algorithms also learn big chunks of their logic from real-world data. Much as a child observes and learns from her environment, modern algorithms learn how to drive cars and chat with people by observing humans doing the same tasks. This is their nurture and it is starting to become more important as AI is being used more and more. So, many of the biases and unpredictable behaviors of modern algorithms are picked up from the data on which AI algorithms are trained. In short, our algorithms are hanging out with bad data!
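The “bad data” point can be made concrete with a toy sketch. Here a naive screening rule is “trained” entirely on hypothetical historical decisions; because those decisions were biased against one group, the learned rule faithfully reproduces the bias even though the code itself contains nothing discriminatory. All data, names, and the 50% threshold are invented for illustration:

```python
# Toy sketch (hypothetical data): a screening "algorithm" whose rule is
# learned entirely from past decisions. If those decisions were biased,
# the learned rule inherits the bias -- a nurture problem, not a nature one.

# Historical hiring records: (years_of_experience, group, was_hired).
# Group "B" candidates were historically rejected even when qualified.
historical = [
    (5, "A", True), (6, "A", True), (2, "A", False),
    (5, "B", False), (6, "B", False), (2, "B", False),
]

def learn_rule(records):
    """Compute the historical hire rate per group (a stand-in for training)."""
    stats = {}
    for _exp, group, hired in records:
        s = stats.setdefault(group, [0, 0])
        s[0] += int(hired)   # hires
        s[1] += 1            # total seen
    return {g: hires / total for g, (hires, total) in stats.items()}

def screen(candidate_group, learned):
    """Recommend an interview only if the group's historical hire rate > 50%."""
    return learned.get(candidate_group, 0) > 0.5

learned = learn_rule(historical)
print(screen("A", learned))  # True  -- group A passes the screen
print(screen("B", learned))  # False -- bias inherited from the data
```

Nothing in `screen` mentions group "B" unfavorably; the unfairness lives entirely in the training data the rule was distilled from, which is exactly the sense in which modern algorithms are “hanging out with bad data.”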
Have algorithms come to increasingly rule the world?
Algorithms touch our lives every day, from how we choose products to purchase (Amazon’s “People who bought this also bought”) and movies to watch (Netflix’s recommendations) to whom we date or marry (Match.com or Tinder matches). In our imagination, we generally nod politely at recommendations by algorithms and make our own choices. But that’s not the reality. Consider these facts: 80% of viewing hours streamed on Netflix originate from automated recommendations. By some estimates, nearly 35% of sales at Amazon originate from automated recommendations. And the clear majority of matches on dating apps like Tinder are initiated by algorithms.
Algorithms are also advancing beyond their original decision-support role of offering suggestions to becoming autonomous systems that make decisions on our behalf. For example, they can invest our savings and even drive cars on their own. They are also a part of the workplace – for example, advising insurance agents on how to set premiums and helping recruiters shortlist job applicants. There are very few decisions we make these days that aren’t touched by algorithms.
Since bad code can, and does, affect the lives of billions, how important is it for colleges to teach ethics alongside core courses like computer science, algorithms, computer architecture, neural networks, and data structures?
It is extremely important for everyone to have a functional understanding of how software systems make decisions for us. This includes a basic understanding of algorithms as well as AI. Not everyone needs to know in depth how algorithms and AI function, but we all need to understand the big picture. That means understanding the social and economic implications of automation and AI, and broader tech ethics. In recent years, there has been a lot of talk about how programming knowledge is fundamental and needs to be part of the school curriculum. Those advocates are right, but it would be a mistake to focus only on programming without basic AI and algorithm literacy.
Today, algorithms and AI play a big part in medicine. How are they involved in diagnoses and treatment? Are algorithmic and AI diagnoses more accurate and reliable than those of a physician?
Algorithms are rapidly advancing to automate some of the diagnosis and treatment process. While the newer machine learning-based algorithms perform better than the previous generation, the hype surrounding them often exceeds their actual abilities. The notion that machines are now beating doctors in areas ranging from oncology to diabetes management is rife among journalists, but the facts do not bear it out.
When you say AI trumps doctors, you must ask whether it’s a task that doctors even do. And in many hyped-up cases of AI beating doctors, I have found that it is not. There is no doubt that in some settings such as radiology, AI is doing incredibly well and reducing our need to train as many radiologists. In most other settings, they are automating a small subset of tasks that doctors perform. This will help doctors become more efficient, and free up more of their time to focus on patients. But we shouldn’t expect AI to replace the doctor anytime soon.