Defining the Future of Artificial Intelligence

2018 was a benchmark year for AI. But it’s up to humans to steer the technology toward a culture of ethics and accountability.

[Image: object detection demo. Photo Credit: Amazon Rekognition Video]

What will the rise of artificial intelligence (AI) mean for humanity? Theories typically fall into one of two extremes: AI is either hailed as the greatest innovation of the modern age, destined to exponentially expand human consciousness; or it is condemned as representing the looming end of human agency. Even in its formative years, it is clear that AI holds more disruptive potential than any technology that came before it.

2018 was a benchmark year for the technology. AI further permeated the consumer world through advances in natural-language voice processing, autonomous vehicles, and personal assistants. More importantly, the tech community took major steps to steer AI innovation away from the feared dystopia rather than toward it.

A particularly high-profile debate surrounded Project Maven, the Pentagon program that uses machine learning to develop object detection and classification capabilities for the U.S. military. In June 2018, Google decided not to renew its Project Maven contract following outspoken protests from thousands of employees concerned that Google’s AI capabilities were being used for tracking and targeting in overseas drone strikes. That month, the company released a code of ethics establishing that Google would not design AI for use in weapons systems.
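
To make the underlying capability concrete, the sketch below runs a generic pretrained object detector over a single image. This is a minimal example built on torchvision’s off-the-shelf Faster R-CNN weights; the image path and confidence threshold are hypothetical, and none of it reflects Project Maven’s actual models or pipeline.

```python
# Minimal object-detection sketch using a pretrained torchvision model.
# Illustrative only: a generic detector, not Project Maven's pipeline.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights)
model.eval()  # inference mode

img = read_image("street_scene.jpg")   # hypothetical input image
batch = [weights.transforms()(img)]    # normalize to the model's expected input

with torch.no_grad():
    detections = model(batch)[0]

# Keep confident detections and map numeric labels to class names.
categories = weights.meta["categories"]
for box, label, score in zip(
    detections["boxes"], detections["labels"], detections["scores"]
):
    if score >= 0.8:  # illustrative confidence threshold
        print(f"{categories[label]}: {score:.2f} at {box.tolist()}")
```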

Because there is little to no legislative regulation of the technology, U.S. companies have been left to form internal AI ethics teams to guide decisions and to inform employees and shareholders. While most tech giants, including Facebook, Microsoft, and IBM, are striving to stay on the right side of public AI sentiment, Amazon has remained committed to serving military and law enforcement customers. In May 2018, apprehensive Amazon employees highlighted the potential of facial recognition software as a mass surveillance tool, drawing comparisons to Orwellian science fiction. That same month, a coalition led by the American Civil Liberties Union demanded that Amazon stop offering its Rekognition tool to government agencies, citing data privacy infractions, but so far the company has not yielded to the pressure.

The public is even more suspicious of these applications given early AI’s documented biases against certain demographics. Most AI tools reflect the social slants of their creators, who are in most cases white men. A paper by MIT and Stanford researchers found that facial recognition tools from Microsoft and IBM were significantly less accurate on dark-skinned female subjects. The implications of deploying such biased systems in police forces, for example, are troubling. Facial recognition providers are rushing to self-correct with more diverse training datasets, aiming to eliminate these biases before AI is widely employed in contexts where human lives are at stake.
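
One way such disparities surface is through a per-subgroup audit: computing accuracy separately for each demographic group rather than in aggregate. The sketch below uses entirely hypothetical counts (not the study’s data) to show how a respectable overall score can mask a large gap for one subgroup.

```python
# Sketch of a per-subgroup accuracy audit, in the spirit of the cited
# facial-recognition findings. All counts are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "subgroup": ["lighter_male", "lighter_female", "darker_male", "darker_female"],
    "n":        [500, 500, 500, 500],
    "correct":  [495, 480, 470, 350],  # invented for illustration
})
results["accuracy"] = results["correct"] / results["n"]

# A large spread between best and worst subgroups signals bias that the
# aggregate accuracy (here ~90%) alone would hide.
gap = results["accuracy"].max() - results["accuracy"].min()
print(results.to_string(index=False))
print(f"Accuracy gap across subgroups: {gap:.1%}")
```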

Instead of viewing these ethical lapses as problems with the technology itself, legislators should use them as starting points to develop regulations for the use of nascent but powerful AI tools. Widespread adoption won’t happen without the trust of the general public, but most people will remain apprehensive until they are certain life with the technology is at least as safe as life without.

Autonomous Vehicles

This push-and-pull is also playing out in the field of autonomous vehicles (AVs), which many predict will be the gateway technology introducing unmanned systems to the masses. Tragically, 2018 marked the first pedestrian fatality involving a self-driving car. Uber, which owned the vehicle, responded by pulling all of its AVs from public roads nationwide and launching company-wide efforts to ensure driver and pedestrian safety. The sparse policies of AV testbed states like Pennsylvania, Arizona, and California were largely unchanged by the incident. Seven months later, Uber resumed road tests with heightened safety measures; each test car now operates under the supervision of two human backup drivers.

The fatal crash indicates AVs are not yet ready to hit roadways at commercial scale. However, the public may embrace autonomous driving if accidents are shown to occur at a lower rate than with human operators. For comparison, there were more than 37,000 deaths in traffic-related incidents in the U.S. in 2017. Considering the potential of advanced AVs to reduce traffic, loosen dependence on personal vehicle ownership, and cut carbon emissions, this is one area in which continued research could yield positive societal change.
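
That comparison is easiest to make as a rate. The back-of-envelope arithmetic below pairs the article’s 37,000-death figure with an assumed ~3.2 trillion U.S. vehicle miles traveled in 2017 (roughly the commonly cited FHWA estimate) to get the baseline, about one death per 100 million miles, that AVs would need to beat.

```python
# Back-of-envelope baseline for AV safety comparisons.
# The 37,000+ deaths figure is from the article; ~3.2 trillion vehicle
# miles traveled (VMT) in 2017 is an assumed estimate for illustration.
deaths_2017 = 37_000
vmt_2017 = 3.2e12  # miles

rate_per_100m_miles = deaths_2017 / (vmt_2017 / 1e8)
print(f"Human-driven baseline: ~{rate_per_100m_miles:.2f} deaths per 100M miles")
# AVs would earn the safety case by demonstrating a lower measured rate
# over comparable real-world mileage.
```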

Trust is slowly building. In Chandler, Ariz., the test site for Google sister company Waymo’s autonomous minivans, residents have grown so accustomed to driverless cars that they describe them as “boring” and “white noise.” An HNTB survey reports that 60 percent of millennials would trust an AV to transport them safely.

The Future of Cognitive Freedom 

This phenomenon of willingly relinquishing control to artificially intelligent programs has stoked philosophical fears about the future of human decision-making. Some believe that using AI to inform decisions today marks the beginning of the end of cognitive freedom. They ask: How long before AI becomes more intelligent than humans? After that point, the so-called singularity, will we leave all decisions up to algorithms?

A Pew study polling more than 900 AI experts in summer 2018 suggests ways to help the world adopt AI without threatening human autonomy. Among them are: “developing policies to assure that development of AI will be directed at augmenting humans and the common good; and shifting the priorities of economic, political, and education systems to empower individuals to stay ahead in the ‘race with the robots.’”

One of the first such policies arrived on the international stage in May 2018, when the European Union’s General Data Protection Regulation (GDPR) took effect. In addition to establishing protections around personal data collection, GDPR stipulates that companies must be able to explain the logic behind automated decision-making.
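
What “explaining the logic” might look like is easiest to see with an interpretable model. The sketch below uses a hypothetical linear scoring model whose per-feature contributions can be reported back to the person affected by a decision; the feature names and weights are invented for illustration, and this is one possible approach, not a statement of what GDPR compliance requires.

```python
# Sketch of an explainable automated decision: a linear model whose
# per-feature contributions can be surfaced to the affected person.
# Feature names and weights are hypothetical.
import numpy as np

feature_names = ["income", "credit_history_years", "outstanding_debt"]
weights = np.array([0.4, 0.3, -0.6])  # illustrative trained coefficients
bias = -0.1

def decide_and_explain(x):
    contributions = weights * x
    score = contributions.sum() + bias
    decision = "approved" if score > 0 else "denied"
    # Rank features by the size of their signed contribution: this is the
    # "logic behind" the automated decision, in human-readable form.
    explanation = sorted(zip(feature_names, contributions),
                         key=lambda pair: abs(pair[1]), reverse=True)
    return decision, explanation

decision, explanation = decide_and_explain(np.array([1.2, 0.5, 1.1]))
print(decision)
for name, contrib in explanation:
    print(f"  {name}: {contrib:+.2f}")
```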

In the U.S., there remains a significant movement within the commercial sector to self-impose regulations ahead of legislation. More than 80 companies have joined the Partnership on AI consortium, which aims to ensure beneficial and responsible AI use. Harvard, MIT, NYU, and others are leading the charge in academia, offering courses on the ethics of AI and deep learning that encourage young developers to actively combat bias and to innovate with progressive use cases in mind.
