Tag Archives: Artificial intelligence

3 Ways AI Projects Get Derailed, and How to Stop Them

The rate at which companies are implementing AI continues to skyrocket. Don’t fall victim to wasted time and a blown budget.
By Don Roedner – In the blink of an eye, AI has gone from novelty to urgency.

Tech leaders are telling companies they need to adopt AI now or be left behind. And a recent Gartner survey shows just that: AI adoption has skyrocketed over the last four years, with a 270 percent increase in the percentage of enterprises implementing AI during that period.

However, the same survey shows that 63 percent of organizations still haven’t implemented AI or machine learning (ML) in any form.

Why are so many organizations falling behind the curve?

We meet with companies every week that are in some stage of their first ML project. And sadly, most of the conversations go more or less the same way. The project is strategic and highly visible within the organization. The internal proof of concept went off without a hitch. Now, the team is focused on getting the model’s level of confidence to a point where it can be put into production.

It’s at this point – the transition from proof of concept to production software development – that the project typically runs into big trouble. When we first meet with data science teams, their budget is often dwindling, their delivery deadline is imminent, and their model is still underperforming.

Sound familiar? The guidelines below might help your organization get its AI model to production on time without blowing your budget. more>

Updates from ITU

‘AI for Good’ or scary AI?
By Neil Sahota and Michael Ashley – Some futurists fear Artificial Intelligence (AI), perhaps understandably. After all, AI appears in all kinds of menacing ways in popular culture, from the Terminator movie franchise to homicidal HAL in 2001: A Space Odyssey.

Though these movies depict Artificial General Intelligence (AGI) gone awry, it’s important to note that some leading tech scholars, such as George Gilder (author of Life After Google), doubt humans will ever be able to create in our machines the sentience we take for granted (AGI).

As it turns out, the predominant fear the typical person actually holds about AI pertains to Artificial Narrow Intelligence (ANI).

Specialized by design, ANI focuses on narrow tasks, like routing you to your destination, or maybe one day driving you there.

One of the things we uncovered while cowriting our new book, Own the A.I. Revolution: Unlock Your Artificial Intelligence Strategy to Disrupt Your Competition, is that people fear narrow, task-completing AIs will take their jobs.

“It’s no secret many people worry about this type of problem,” Irakli Beridze, who is a speaker at the upcoming AI For Good Global Summit and heads the Centre for Artificial Intelligence and Robotics at the United Nations Interregional Crime and Justice Research Institute, told us when interviewed for the book.

“One way or another, AI-induced unemployment is a risk we cannot dismiss out of hand. We regularly see reports predicting AI will wipe out 20 to 70 percent of jobs. And we’re not just talking about truck drivers and factory workers, but also accountants, lawyers, doctors, and other highly skilled professionals.” more>

Related>

It’s Still Early Days for AI

Neural networks expand far beyond feline photos
By Rick Merritt – “We need to get to real AI because most of today’s systems don’t have the common sense of a house cat!” The keynoter’s words drew chuckles from an audience of 3,000 engineers who have seen the demos of systems recognizing photos of felines.

There’s plenty of room for skepticism about AI. Ironically, the speaker in this case was Yann LeCun, the father of convolutional neural networks, the model that famously identified cat pictures better than a human.

It’s true, deep neural networks (DNNs) are a statistical method — by their very nature inexact. They require large, labeled data sets, something many users lack.

It’s also true that DNNs can be fragile. The pattern-matching technique can return dumb results when the data sets are incomplete, and misleading results when they have been corrupted. Even when results are impressive, they are typically inexplicable.
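That fragility shows up even in the simplest statistical pattern matchers. The toy nearest-centroid classifier below is an illustrative stand-in (not any production DNN, and the data is made up): it will confidently label an input that looks nothing like its training data, because it has no notion of "I don't know."

```python
import numpy as np

# Toy nearest-centroid classifier: a stand-in for any statistical
# pattern matcher. It always returns *some* label, with no notion
# of "this input is unlike anything I was trained on".
def fit_centroids(X, y):
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, x):
    # Assign the label of whichever class centroid is nearest.
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

# Two well-separated 2-D classes: class 0 near (0, 0), class 1 near (5, 5).
X = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [4.9, 5.0]])
y = np.array([0, 0, 1, 1])
centroids = fit_centroids(X, y)

print(predict(centroids, np.array([0.1, 0.1])))     # in-distribution query: class 0
print(predict(centroids, np.array([0.0, -100.0])))  # far outside the training data,
                                                    # yet still confidently labeled
```

The point of the sketch is only the failure mode: statistical models interpolate over whatever data they were given, so incomplete or corrupted training sets silently shift the answers.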

The emerging technique has had its share of publicity, sometimes bordering on hype. The fact remains that DNNs work. Though only a few years old, they already are being applied widely. Facebook alone uses sometimes simple neural nets to perform 3 × 10¹⁴ predictions per day, some of which are run on mobile devices, according to LeCun.

Deep learning is here to stay as a new form of computing. Its application space is still being explored. Its underlying models and algorithms are still evolving, and hardware is trying to catch up with it all. more>

Moral technology

Self-driving cars don’t drink and medical AIs are never overtired. Given our obvious flaws, what can humans still do best?
By Paula Boddington – Artificial intelligence (AI) might change how we approach tasks, and what we value. If we use AI to do our thinking for us, our own thinking skills might atrophy.

The AI we have at the moment is narrow AI – it can perform only selected, specific tasks. And even when an AI can perform as well as, or better than, humans at certain tasks, it does not necessarily achieve these results in the same way that humans do. One thing that AI is very good at is sifting through masses of data at great speed.

Using machine learning, an AI that’s been trained with thousands of images can develop the capacity to recognize a photograph of a cat (an important achievement, given the predominance of pictures of cats on the internet). But humans do this very differently. A small child can often recognize a cat after just one example.

Because AI might ‘think’ differently to how humans think, and because of the general tendency to get swept up in its allure, its use could well change how we approach tasks and make decisions. The seductive allure that tends to surround AI in fact represents one of its dangers. Those working in the field despair that almost every article about AI hypes its powers, and even those about banal uses of AI are illustrated with killer robots.

It’s important to remember that AI can take many forms, and be applied in many different ways, so none of this is to argue that using AI will be ‘good’ or ‘bad’. In some cases, AI might nudge us to improve our approach. But in others, it could reduce or atrophy our approach to important issues. It might even skew how we think about values.

We can get used to technology very swiftly. Change-blindness and fast adaptation to technology can mean we’re not fully aware of such cultural and value shifts. more>

Updates from Ciena

5G on stage in Barcelona

By Brian Lavallee – The theme of this year’s event is “Intelligent Connectivity” – the term we use to describe the powerful combination of flexible, high-speed 5G networks, the Internet of Things (IoT), artificial intelligence (AI) and big data. This clearly highlights the important fact that 5G is more than just a wireless upgrade. It’s also about updating the entire wireline network, from radios to the data centers where accessed content is hosted, and everything in between.

This means the move to broad 5G-based mobile services and associated capabilities will be a multi-year journey requiring many strategic partnerships.

That journey towards ubiquitous 5G services will be the star at MWC, and rightfully so.

There remains uncertainty about what technologies and architecture should be used for specific parts of the end-to-end 5G mobile network, such as the often discussed (and often hotly debated) fronthaul space.

5G mobile services are already being turned up in many regions in the form of early deployments, field trials, and proofs of concept. These services are delivered in the 5G Non-Standalone (NSA) configuration, which essentially hangs 5G New Radios (NRs) off existing 4G Evolved Packet Core (EPC) networks. This allows operators to test new 5G wireless technologies and jumpstarts critical radio-frequency planning and testing.

It also means that most new wireline upgrades that are taking place now for 4G expansion and growth will also carry 5G wireless traffic to and from data centers. more>

Related>

Artificial Intelligence and the Future of Humans

Experts say the rise of artificial intelligence will make most people better off over the next decade, but many have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will.
By Janna Anderson, Lee Rainie and Alex Luchsinger – Digital life is augmenting human capacities and disrupting eons-old human activities. Code-driven systems have spread to more than half of the world’s inhabitants in ambient information and connectivity, offering previously unimagined opportunities and unprecedented threats.

As emerging algorithm-driven artificial intelligence (AI) continues to spread, will people be better off than they are today?

Some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists answered this question in a canvassing of experts conducted in the summer of 2018.

The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities. They spoke of wide-ranging possibilities: that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation. They said “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more-customized future.

Yet, most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. more>

The Future of Machine Learning: Neuromorphic Processors

By Narayan Srinivasa – Machine learning has emerged as the dominant tool for implementing complex cognitive tasks, resulting in machines that have, in some cases, demonstrated super-human performance. However, these machines require training with large amounts of labeled data, and this energy-hungry training process has often been prohibitive in the absence of costly supercomputers.

The way animals and humans learn is far more efficient, driven by the evolution of a different kind of processor, the brain, which simultaneously optimizes the energy of computation with efficient information processing. The next generation of computers, called neuromorphic processors, will strive to strike this delicate balance between the efficiency of a computation and the energy it requires.

The foundation for the design of neuromorphic processors is rooted in our understanding of how biological computation is very different from the digital computers of today (Figure).

The brain is composed of noisy analog computing elements, including neurons and synapses. Neurons operate as relaxation oscillators. Synapses are implicated in memory formation in the brain and can resolve only three to four bits of information each. It is well known that the brain operates using a plethora of brain rhythms but without any global clock (i.e., it is clock-free); the dynamics of these elements are asynchronous. more>
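The relaxation-oscillator behavior of a neuron can be sketched with a minimal leaky integrate-and-fire model. This is an illustrative toy, not code from any neuromorphic processor, and the constants are arbitrary rather than biologically calibrated:

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential v charges
# toward the input current, then relaxes (resets) each time it crosses the
# firing threshold -- a crude relaxation oscillator. No global clock drives
# the spikes; their timing emerges from the element's own dynamics.
def simulate_lif(current, steps, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    v = v_reset
    spike_times = []
    for t in range(steps):
        v += dt * (-v + current) / tau   # leaky integration: dv/dt = (-v + I) / tau
        if v >= v_thresh:                # threshold crossed: emit a spike...
            spike_times.append(t)
            v = v_reset                  # ...and relax back to rest
    return spike_times

spikes = simulate_lif(current=2.0, steps=100)
print(len(spikes))                                 # a constant input yields a regular spike train
print(len(simulate_lif(current=4.0, steps=100)))   # a stronger input fires faster
```

In this sketch a neuron encodes the strength of its input in its firing rate rather than in a clocked digital value, which is the intuition behind the asynchronous, rhythm-based computation described above.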

AI’s Ethical Implications: The Responsibility Of Firms, Policymakers and Society?

By Frederick Ahen – The market for AI is massive.

The expertise needed in the field is growing exponentially; in fact, firms are unable to meet the demand for specialists. AI’s contributions to both advanced and emerging economies are significant, and it is also powering other fields that once depended on manual labor and painstakingly slow processes.

For example, precision agriculture now uses drones to help irrigate and monitor plant growth, remove weeds and take care of individual plants. This is how the world is being fed.

Journalists are using drones to search for truth in remote areas. Driverless cars are being tested. Drones are doing wonders in the logistics and supply chain areas. But drones are also used for killing, policing and tracking down criminal activities.

There are many other advantages of AI in the health sector, elderly care and precision medicine. AI machines have the capacity to do things more efficiently than humans, or even to operate in spaces that are too dangerous for humans.

This is the gospel. Take it or leave it.

But there is more to the above. What is also true is that ‘the world is a business’ and business is politics that controls science, technology and information dissemination. These three entities know how to subliminally manipulate, calm, manage and shape public sentiments about anything.

They control how much knowledge we can have, who gets vilified for knowing or speaking the truth or for demanding an ethical approach to the production and use of AI, and who gets turned into a hero for spinning the truth.

So, the question is, which industrial policies will promote the proper use of AI for the greater good through ethical responsibility in the midst of profits, power, politics and polity? more>

Why Is the US Losing the AI Race?

By Chris Wiltz – AI is rapidly becoming a globally valued commodity. And nations that lead in AI will likely be the ones that guide the global economy in the near future.

“As AI technology continues to advance, its progress has the potential to dramatically reshape the nation’s economic growth and welfare. It is critical the federal government build upon, and increase, its capacity to understand, develop, and manage the risks associated with this technology’s increased use,” the report stated.

While the US has traditionally led the world in developing and applying AI technologies, the new report finds it’s no longer a given that the nation will be number 1 when it comes to AI. Witnesses interviewed by the House Subcommittee said that federal funding levels for AI research are not keeping pace with the rest of the industrialized world, with one witness stating: “[W]hile other governments are aggressively raising their research funding, US government research has been relatively flat.”

Perhaps not surprisingly, China is the biggest competitor to the US in the AI space. “Notably, China’s commitment to funding R&D has been growing sharply, up 200 percent from 2000 to 2015,” the report said.

AI’s potential threat to national security was cited as a key reason to ramp up R&D efforts. While there has yet to be a major hack or data breach involving AI, many security experts believe it is only a matter of time.

Cybersecurity companies are already leveraging AI to assist in tasks such as monitoring network traffic for suspicious activity and even for simulating cyberattacks on systems. It would be foolish to assume that malicious parties aren’t looking to take advantage of AI for their own gain as well. more>

Updates from Chicago Booth

The robots are coming, and that’s (mostly) a good thing
By Nicholas Polson and James Scott – We teach data science to hundreds of students per year, and they’re all fascinated by artificial intelligence. And they ask great questions.

How does a car learn to drive itself?

How does Alexa understand what I’m saying?

How does Spotify pick such good playlists for me?

How does Facebook recognize my friends in the photos I upload?

These students realize that AI isn’t some sci-fi droid from the future; it’s right here, right now, and it’s changing the world one smartphone at a time. They all want to understand it, and they all want to be a part of it.

And our students aren’t the only ones enthusiastic about AI. They’re joined in their enthusiasm by the world’s largest companies, from Amazon, Facebook, and Google in America to Baidu, Tencent, and Alibaba in China.

As you may have heard, these big tech firms are waging an expensive global arms race for AI talent, which they judge to be essential to their future.

Yet while this arms race is real, we think there’s a much more powerful trend at work in AI today—a trend of diffusion and dissemination, rather than concentration. Yes, every big tech company is trying to hoard math and coding talent. But at the same time, the underlying technologies and ideas behind AI are spreading with extraordinary speed: to smaller companies, to other parts of the economy, to hobbyists and coders and scientists and researchers everywhere in the world.

That democratizing trend, more than anything else, is what has our students today so excited, as they contemplate a vast range of problems practically begging for good AI solutions. more>

Related>