Tag Archives: Artificial intelligence

It’s Still Early Days for AI

Neural networks expand far beyond feline photos
By Rick Merritt – “We need to get to real AI because most of today’s systems don’t have the common sense of a house cat!” The keynoter’s words drew chuckles from an audience of 3,000 engineers who have seen the demos of systems recognizing photos of felines.

There’s plenty of room for skepticism about AI. Ironically, the speaker in this case was Yann LeCun, the father of convolutional neural networks, the model that famously identified cat pictures better than a human.

It’s true, deep neural networks (DNNs) are a statistical method — by their very nature inexact. They require large, labeled data sets, something many users lack.

It’s also true that DNNs can be fragile. The pattern-matching technique can return dumb results when the data sets are incomplete and misleading results when they have been corrupted. Even when results are impressive, they are typically inexplicable.
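To make that data dependence concrete, here is a minimal, hypothetical sketch in Python using scikit-learn (not anything the article describes): a small neural-network classifier is fit on clean labeled examples, then refit after 30 percent of the training labels are flipped, and its test accuracy falls accordingly.

```python
# Minimal sketch (not from the article): a small neural-network classifier
# trained on labeled data, then retrained after a share of those labels is
# corrupted. Dataset and parameters are synthetic and purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic "labeled data set": 2,000 examples, 20 features, 2 classes.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clean.fit(X_train, y_train)
print("clean labels:    ", clean.score(X_test, y_test))

# Flip 30% of the training labels and retrain: test accuracy drops,
# mirroring the point about incomplete or corrupted data.
rng = np.random.default_rng(0)
flip = rng.random(len(y_train)) < 0.3
noisy_labels = np.where(flip, 1 - y_train, y_train)

noisy = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
noisy.fit(X_train, noisy_labels)
print("corrupted labels:", noisy.score(X_test, y_test))
```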

The emerging technique has had its share of publicity, sometimes bordering on hype. The fact remains that DNNs work. Though only a few years old, they are already being applied widely. Facebook alone uses neural nets, some of them quite simple, to perform 3×10¹⁴ predictions per day, some of which run on mobile devices, according to LeCun.

Deep learning is here to stay as a new form of computing. Its application space is still being explored. Its underlying models and algorithms are still evolving, and hardware is trying to catch up with it all. more>

Moral technology

Self-driving cars don’t drink and medical AIs are never overtired. Given our obvious flaws, what can humans still do best?
By Paula Boddington – Artificial intelligence (AI) has the potential to change how we approach tasks, and what we value. If we use AI to do our thinking for us, it might atrophy our thinking skills.

The AI we have at the moment is narrow AI – it can perform only selected, specific tasks. And even when an AI can perform as well as, or better than, humans at certain tasks, it does not necessarily achieve these results in the same way that humans do. One thing that AI is very good at is sifting through masses of data at great speed.

Using machine learning, an AI that’s been trained with thousands of images can develop the capacity to recognize a photograph of a cat (an important achievement, given the predominance of pictures of cats on the internet). But humans do this very differently. A small child can often recognize a cat after just one example.
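As a rough, hypothetical illustration of that data appetite (handwritten digits standing in for cat photos, using scikit-learn's small bundled dataset), the sketch below fits the same simple learner on a handful, then hundreds, then over a thousand labeled images; accuracy climbs only as the labeled set grows.

```python
# Illustrative sketch: how much labeled data a simple learner needs.
# Digits stand in for cat photos; the dataset ships with scikit-learn.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)        # 1,797 small 8x8 images, 10 classes
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=500, random_state=0)

for n in (20, 50, 500, len(X_train)):
    clf = LogisticRegression(max_iter=2000)
    clf.fit(X_train[:n], y_train[:n])
    print(f"{n:5d} labeled images -> test accuracy {clf.score(X_test, y_test):.2f}")
```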

Because AI might ‘think’ differently to how humans think, and because of the general tendency to get swept up in its allure, its use could well change how we approach tasks and make decisions. The seductive allure that tends to surround AI in fact represents one of its dangers. Those working in the field despair that almost every article about AI hypes its powers, and even those about banal uses of AI are illustrated with killer robots.

It’s important to remember that AI can take many forms, and be applied in many different ways, so none of this is to argue that using AI will be ‘good’ or ‘bad’. In some cases, AI might nudge us to improve our approach. But in others, it could reduce or atrophy our approach to important issues. It might even skew how we think about values.

We can get used to technology very swiftly. Change-blindness and fast adaptation to technology can mean we’re not fully aware of such cultural and value shifts. more>

Updates from Ciena

5G on stage in Barcelona

By Brian Lavallee – The theme of this year’s event is “Intelligent Connectivity,” the term we use to describe the powerful combination of flexible, high-speed 5G networks, the Internet of Things (IoT), artificial intelligence (AI) and big data. This clearly highlights the important fact that 5G is more than just a wireless upgrade. It’s also about updating the entire wireline network, from the radios to the data centers where accessed content is hosted, and everything in between.

This means the move to broad 5G-based mobile services and associated capabilities will be a multi-year journey requiring many strategic partnerships.

The multi-year journey towards ubiquitous 5G services will be the star at MWC, and rightfully so.

There remains uncertainty about what technologies and architecture should be used for specific parts of the end-to-end 5G mobile network, such as the often discussed (and often hotly debated) fronthaul space.

Early 5G mobile services are already being turned up in many regions in the form of initial deployments, field trials, and proofs of concept. These services are delivered in the 5G Non-Standalone (NSA) configuration, which essentially hangs 5G New Radios (NRs) off existing 4G Evolved Packet Core (EPC) networks. This allows new 5G wireless technologies to be tested and jumpstarts critical Radio Frequency (RF) planning.

It also means that most new wireline upgrades that are taking place now for 4G expansion and growth will also carry 5G wireless traffic to and from data centers. more>

Related>

Artificial Intelligence and the Future of Humans

Experts say the rise of artificial intelligence will make most people better off over the next decade, but many have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will.
By Janna Anderson, Lee Rainie and Alex Luchsinger – Digital life is augmenting human capacities and disrupting eons-old human activities. Code-driven systems have spread to more than half of the world’s inhabitants in ambient information and connectivity, offering previously unimagined opportunities and unprecedented threats.

As emerging algorithm-driven artificial intelligence (AI) continues to spread, will people be better off than they are today?

Some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists answered this question in a canvassing of experts conducted in the summer of 2018.

The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities. They spoke of the wide-ranging possibilities: that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation. They said “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more-customized future.

Yet, most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. more>

The Future of Machine Learning: Neuromorphic Processors

By Narayan Srinivasa – Machine learning has emerged as the dominant tool for implementing complex cognitive tasks, resulting in machines that have demonstrated, in some cases, super-human performance. However, these machines require training with large amounts of labeled data, and this energy-hungry training process has often been prohibitive in the absence of costly super-computers.

The way animals and humans learn is far more efficient, driven by the evolution of a different processor, the brain, which simultaneously optimizes the energy of computation and the efficiency of information processing. The next generation of computers, called neuromorphic processors, will strive to strike this delicate balance between the efficiency of computation and the energy it requires.

The foundation for the design of neuromorphic processors is rooted in our understanding of how biological computation differs from the digital computation of today (Figure).

The brain is composed of noisy analog computing elements, including neurons and synapses. Neurons operate as relaxation oscillators. Synapses are implicated in memory formation in the brain and can resolve only three to four bits of information each. It is well known that the brain operates using a plethora of brain rhythms but without any global clock (i.e., it is clock-free), and the dynamics of these elements are asynchronous. more>
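A toy sketch of the spiking, oscillator-like dynamics described above (illustrative only, not the author's hardware): a leaky integrate-and-fire neuron integrates its input continuously, emits a spike when a threshold is crossed, and resets, so information is carried in spike timing rather than in clocked binary values. All constants are made up but physiologically plausible.

```python
# Toy leaky integrate-and-fire neuron; parameters are illustrative only.
import numpy as np

def simulate_lif(current, dt=1e-3, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Integrate membrane voltage; record a spike time at each threshold crossing."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(current):
        # Leaky integration of the input current (Euler step of dV/dt).
        v += dt / tau * (-(v - v_rest) + r_m * i_in)
        if v >= v_thresh:              # threshold crossing -> spike, then reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant 2 nA input for 200 ms yields a regular train of spike times.
print(simulate_lif(np.full(200, 2e-9)))
```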

AI’s Ethical Implications: The Responsibility Of Firms, Policymakers and Society?

By Frederick Ahen – The market for AI is massive.

The expertise needed in the field is growing exponentially; in fact, firms are unable to meet the demand for specialists. The contributions of AI to both advanced and emerging economies are significant, and it is also powering other fields that once depended on manual labor and painstakingly slow processes.

For example, precision agriculture now uses drones to help irrigate and monitor plant growth, remove weeds and take care of individual plants. This is how the world is being fed.

Journalists are using drones to search for truth in remote areas. Driverless cars are being tested. Drones are doing wonders in the logistics and supply chain areas. But drones are also used for killing, policing and tracking down criminal activities.

There are many other advantages of AI in the health sector, elderly care and precision medicine. AI machines have the capacity to do things more efficiently than humans, or even to tread in spaces that are too dangerous for humans.

This is the gospel. Take it or leave it.

But there is more to the above. What is also true is that ‘the world is a business’ and business is politics that controls science, technology and information dissemination. These three entities know how to subliminally manipulate, calm, manage and shape public sentiments about anything.

They control how much knowledge we can have, and whether a person is vilified for knowing or speaking the truth and for demanding an ethical approach to the production and use of AI, or turned into a hero for spinning the truth.

So, the question is, which industrial policies will promote the proper use of AI for the greater good through ethical responsibility in the midst of profits, power, politics and polity? more>

Why Is the US Losing the AI Race?

By Chris Wiltz – AI is rapidly becoming a globally valued commodity. And nations that lead in AI will likely be the ones that guide the global economy in the near future.

“As AI technology continues to advance, its progress has the potential to dramatically reshape the nation’s economic growth and welfare. It is critical the federal government build upon, and increase, its capacity to understand, develop, and manage the risks associated with this technology’s increased use,” a new House Subcommittee report stated.

While the US has traditionally led the world in developing and applying AI technologies, the new report finds it’s no longer a given that the nation will be number 1 when it comes to AI. Witnesses interviewed by the House Subcommittee said that federal funding levels for AI research are not keeping pace with the rest of the industrialized world, with one witness stating: “[W]hile other governments are aggressively raising their research funding, US government research has been relatively flat.”

Perhaps not surprisingly, China is the biggest competitor to the US in the AI space. “Notably, China’s commitment to funding R&D has been growing sharply, up 200 percent from 2000 to 2015,” the report said.

AI’s potential threat to national security was cited as a key reason to ramp up R&D efforts. While there has yet to be a major hack or data breach involving AI, many security experts believe it is only a matter of time.

Cybersecurity companies are already leveraging AI to assist in tasks such as monitoring network traffic for suspicious activity and even for simulating cyberattacks on systems. It would be foolish to assume that malicious parties aren’t looking to take advantage of AI for their own gain as well. more>
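As a generic illustration of that kind of monitoring (not any particular vendor's tooling), the sketch below fits an unsupervised anomaly detector to synthetic flow records and flags flows that deviate from the learned baseline; all feature names and values are invented.

```python
# Hedged sketch: flag unusual network flows with an unsupervised detector.
# Flow features (bytes, packets, duration) and their values are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic baseline traffic: [bytes sent, packets, duration in seconds].
normal_flows = rng.normal(loc=[5_000, 40, 2.0],
                          scale=[1_000, 10, 0.5],
                          size=(1_000, 3))
suspicious = np.array([[900_000, 15_000, 0.2],   # burst, exfiltration-like
                       [120, 2, 600.0]])         # slow, long-lived probe

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_flows)

# predict() returns +1 for inliers and -1 for anomalies.
print(detector.predict(suspicious))        # both should come back as -1
print(detector.predict(normal_flows[:5]))  # mostly +1
```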

Updates from Chicago Booth

The robots are coming, and that’s (mostly) a good thing
By Nicholas Polson and James Scott – We teach data science to hundreds of students per year, and they’re all fascinated by artificial intelligence. And they ask great questions.

How does a car learn to drive itself?

How does Alexa understand what I’m saying?

How does Spotify pick such good playlists for me?

How does Facebook recognize my friends in the photos I upload?

These students realize that AI isn’t some sci-fi droid from the future; it’s right here, right now, and it’s changing the world one smartphone at a time. They all want to understand it, and they all want to be a part of it.

And our students aren’t the only ones enthusiastic about AI. They’re joined in their exaltation by the world’s largest companies—from Amazon, Facebook, and Google in America to Baidu, Tencent, and Alibaba in China.

As you may have heard, these big tech firms are waging an expensive global arms race for AI talent, which they judge to be essential to their future.

Yet while this arms race is real, we think there’s a much more powerful trend at work in AI today—a trend of diffusion and dissemination, rather than concentration. Yes, every big tech company is trying to hoard math and coding talent. But at the same time, the underlying technologies and ideas behind AI are spreading with extraordinary speed: to smaller companies, to other parts of the economy, to hobbyists and coders and scientists and researchers everywhere in the world.

That democratizing trend, more than anything else, is what has our students today so excited, as they contemplate a vast range of problems practically begging for good AI solutions. more>

Related>

My Fair Data: How the Government Can Limit Bias in Artificial Intelligence

By Josh Sullivan, Josh Elliot, Kirsten Lloyd, and Edward Raff –
The rapid development of artificial intelligence (AI) holds great promise, but also potential for pitfalls. AI can change the way we live, work, and play, accelerate drug discoveries, and drive edge computing and autonomous systems. It also has the potential to transform global politics, economies, and cultures in such profound ways that the U.S. and other countries are set to enter what some speculate may be the next Space Race.

We are just beginning to understand the implications of unchecked AI. Recent headlines have highlighted its limitations and the continued need for human control. We will not be able to ignore the range of ethical risks posed by issues of privacy, transparency, safety, control, and bias.

Considering the advances already made in AI—and those yet to be made—AI is undoubtedly on a trajectory toward integration into every aspect of our lives. As we prepare to turn an increasing share of tasks and decision-making over to AI, we must think more critically about how ethics factor into AI design to minimize risk. With this in mind, policymakers must proactively consider ways to incorporate ethics into AI practices and design incentives that promote innovation while ensuring AI operates with our best interests in mind. more>

How to govern AI to make it a force for good

In an interview, Gasser identifies three things policymakers and regulators should consider when developing strategies for dealing with emerging technologies like AI.
Urs Gasser – “Everyone is talking about Artificial Intelligence and its many different applications, whether it’s self-driving cars or personal assistance on the cell phone or AI in health,” he says. “It raises all sorts of governance questions, questions about how these technologies should be regulated to mitigate some of the risks but also, of course, to embrace the opportunities.”

One of the largest challenges to AI is its complexity, which results in a divide between the knowledge of technologists and that of the policymakers and regulators tasked to address it, Gasser says.

“There is actually a relatively small group of people who understand the technology, and there are potentially a very large population affected by the technology,” he says.

This information asymmetry requires a concerted effort to increase education and awareness, he says.

“How do we train the next generation of leaders who are fluent enough to speak both languages and understand engineering enough as well as the world policy and law enough and ethics, importantly, to make these decisions about governance of AI?”

Another challenge is to ensure that new technologies benefit all people in the same way, Gasser says.

Increasing inclusivity requires efforts on the infrastructural level to expand connectivity and also on the data level to provide a “data commons” that is representative of all people, he says. more>