Tag Archives: Machine learning

Updates from ITU

Meet your virtual avatar: the future of personalized healthcare
ITU News – Tingly? Sharp? Electric? Dull? Pulsing?

Trying to describe a pain you feel to your doctor can be a difficult task. But soon, you won’t have to: a computer avatar is expected to tell your doctor everything they need to know.

The CompBioMed Centre of Excellence, an international consortium of universities and industries, is developing a program that creates a hyper-personalized avatar or ‘virtual human’ using a supercomputer-generated simulation of an individual’s physical and biomedical information for clinical diagnostics.

There is a rapidly growing need for this kind of technology-enabled healthcare. An estimated 12 million people who seek outpatient medical care in the U.S. experience some form of diagnostic error each year. Additionally, the World Health Organization estimates that there will be a global shortage of 12.9 million healthcare workers by 2035.

Greater access to technology-enabled healthcare will allow doctors to make better and faster diagnoses – and provide the tools to collect the necessary data.

The Virtual Human project combines different kinds of patient data that are routinely generated as part of the current healthcare system, such as X-rays, CAT scans and MRIs, to create a personalized virtual avatar. more>

Related>

Updates from Chicago Booth

Machine learning can help money managers time markets, build portfolios, and manage risk
By Michael Maiello – It’s been two decades since IBM’s Deep Blue beat chess champion Garry Kasparov, and computers have become even smarter. Machines can now understand text, recognize voices, classify images, and beat humans in Go, a board game more complicated than chess, and perhaps the most complicated in existence.

And research suggests today’s computers can also predict asset returns with unprecedented accuracy.

Yale University’s Bryan T. Kelly, Chicago Booth’s Dacheng Xiu, and Booth PhD candidate Shihao Gu investigated 30,000 individual stocks that traded between 1957 and 2016, examining hundreds of possibly predictive signals using several machine-learning techniques, a form of artificial intelligence. They conclude that ML has significant advantages over conventional analysis in this challenging task.

ML uses statistical techniques to give computers abilities that mimic and sometimes exceed human learning. The idea is that computers will be able to build on solutions to previous problems to eventually tackle issues they weren’t explicitly programmed to take on.
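As a toy illustration of that idea (entirely invented data, one hypothetical ‘momentum’ signal, and plain linear regression, far simpler than the trees and neural networks the study evaluates), a model can be fit on historical signal/return pairs and then asked to predict the return for a signal value it has never seen:

```python
# Toy sketch of ML-style return prediction (illustrative only):
# learn y ~ w*x + b from historical (signal, return) pairs by
# gradient descent, then predict the return for a new signal.

def fit_linear(samples, lr=0.1, epochs=5000):
    """Fit y = w*x + b by full-batch gradient descent on squared error."""
    w, b = 0.0, 0.0
    n = len(samples)
    for _ in range(epochs):
        gw = sum(2 * (w * x + b - y) * x for x, y in samples) / n
        gb = sum(2 * (w * x + b - y) for x, y in samples) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Hypothetical training data: (momentum signal, next-period return)
history = [(0.1, 0.02), (0.4, 0.05), (-0.2, -0.01), (0.3, 0.04)]
w, b = fit_linear(history)          # converges to w = 0.1, b = 0.01
predicted = w * 0.25 + b            # predict for an unseen signal value
print(round(predicted, 4))          # -> 0.035
```

Real systems differ mainly in scale, with thousands of stocks, hundreds of signals and nonlinear models, but the train-then-predict loop is the same.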

“At the broadest level, we find that machine learning offers an improved description of asset price behavior relative to traditional methods,” the researchers write, suggesting that ML could become the engine of effective portfolio management, able to predict asset-price movements better than human managers. more>

Related>

The Future of Machine Learning: Neuromorphic Processors

By Narayan Srinivasa – Machine learning has emerged as the dominant tool for implementing complex cognitive tasks, resulting in machines that have demonstrated, in some cases, superhuman performance. However, these machines require training on large amounts of labeled data, and this energy-hungry training process is often prohibitive without costly supercomputers.

The ways in which animals and humans learn are far more efficient, driven by the evolution of a different processor, the brain, which simultaneously optimizes the energy of computation and the efficiency of information processing. The next generation of computers, called neuromorphic processors, will strive to strike this delicate balance between the efficiency of computation and the energy it requires.

The foundation for the design of neuromorphic processors is rooted in our understanding of how biological computation is very different from the digital computers of today (Figure).

The brain is composed of noisy analog computing elements, including neurons and synapses. Neurons operate as relaxation oscillators. Synapses are implicated in memory formation in the brain and can resolve only three to four bits of information each. It is well known that the brain operates using a plethora of brain rhythms but without any global clock (i.e., clock-free), with the dynamics of these elements operating asynchronously. more>
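In software, these dynamics are often abstracted as spiking neuron models. The sketch below is a minimal leaky integrate-and-fire neuron (parameters invented for illustration; real neuromorphic hardware realizes analog, asynchronous versions of such dynamics): the membrane potential leaks between inputs and fires-and-resets at threshold, so a steady input produces the periodic fire-reset cycle of a relaxation oscillator.

```python
# Minimal leaky integrate-and-fire neuron (illustrative sketch).

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Leaky integration of input current; emit a spike (1) and
    reset the membrane potential when it crosses threshold."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = leak * v + current      # integrate with leak
        if v >= threshold:
            spikes.append(1)        # fire
            v = 0.0                 # reset
        else:
            spikes.append(0)
    return spikes

# A constant drive yields periodic firing, like a relaxation oscillator
print(simulate_lif([0.4] * 10))     # -> [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```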

Updates from GE

Power Play: This Software Takes The Guesswork Out Of Energy Demand
By Bruce Watson – Predicting power demand used to be a simple science: People use more power during certain times — like the morning, when they cook breakfast and turn on their lights — and less during others, like when they hit the sack. Relying on predictable sources of electricity — like gas- and coal-fired power plants — utilities were able to balance supply and demand with some fairly straightforward math based on historical records and other data.

But the steady rise of renewable energy made the power landscape infinitely more complicated. On the supply side, changes in wind or cloud cover can sharply shift the amount of power available. Demand has also become harder to nail down as more consumers manage their power use with smart thermostats and appliances like connected ACs.
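A toy ‘net load’ calculation shows why (all figures are invented for illustration; this is not any utility’s or vendor’s model): what a utility must actually cover is demand minus variable renewable output, and that quantity swings more than demand alone.

```python
# Illustrative net-load calculation with hypothetical hourly figures (GW).
demand = [30, 32, 35, 36, 34, 33]      # consumer demand through the day
solar = [0, 5, 12, 14, 6, 0]           # variable solar output
net_load = [d - s for d, s in zip(demand, solar)]
print(net_load)                         # -> [30, 27, 23, 22, 28, 33]

# Demand swings 6 GW over the day; net load swings 11 GW.
print(max(demand) - min(demand), max(net_load) - min(net_load))   # -> 6 11
```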

At the same time, market forces demand better power forecasts. Power plants and fuel are expensive, and utilities don’t want to operate or buy more equipment than they may need. “In some countries, regulators are asking power generators to guarantee the quality of their forecasts,” says Olivier Cognet, CEO of Swiss-based startup Predictive Layer.

“It’s no longer possible to say ‘We’ll sell you 20 turbines and see what they produce.’ It’s ‘We’ll produce x amount of energy by noon, y amount of energy in two hours and z amount of energy in one month.’” more>

The Future of Military Robotics Looks Like a Nature Documentary

By Gregory C. Allen – Every type of animal, whether insect, fish, bird, or mammal, has a suite of sensors (eyes, ears, noses), tools for moving and interacting with its environment (arms, beaks, wings, fins), and a high-speed data processing and decision-making center (brains).

Humans do not yet know how to replicate all the technologies and capabilities of nature, but the fact that these capabilities exist in nature proves they are indeed possible.

Humans do not know what the ultimate technological performance limit for autonomous robotics is. But it can be no lower than the very high level of performance that nature has proven possible with the pigeon, the goose, the monkey, the mouse, or the dolphin.

The United States is far from the only country interested in these capabilities. In 2015, Russian scientists celebrated their development of a robotic “cockroach,” which they said would be an ideal platform for secretly recording conversations and taking photographs. One can easily imagine such a cockroach being outfitted with venom and an injector needle, making it an ideal platform for covert assassination as well. more> https://goo.gl/Wd1Ecv

Updates from GE

I Machine, You Human: How AI Is Helping GE Build A Powerhouse Of Knowledge
By Tomas Kellner – Every fall, GE Global Research holds a scientific gathering called the Whitney Symposium highlighting the latest scientific trends. Last year the two-day event explored industrial applications of artificial intelligence. We sat down with Mark Grabb and Achalesh Pandey, two GE scientists looking for ways to apply AI to jet engines, medical scanners and other machines.

“We are starting to see significant performance increases from the combination of deep learning and reinforcement learning, where you have a human in the loop correcting the system,” Grabb said. “Once you build a smooth user experience and get the system going, people don’t even know they are correcting the AI along the way.”

At GE, we are writing software like Predix, which is the cloud-based operating system for machines that allows us to connect them to the Industrial Internet. But we also have a tremendous number of domain experts. There’s a lot of physics and domain knowledge that’s required to build good analytics and machine learning models. We have actually built AI systems that help data scientists more quickly and more effectively capture the domain knowledge across all the people inside GE building these models. So AI comes in even in the development of analytics. more> https://goo.gl/OMZ9TS

The body is the missing link for truly intelligent machines

BOOK REVIEW

Basin and Range, Author: John McPhee.
Descartes’ Error, Author: Antonio Damasio.

By Ben Medlock – Things took a wrong turn at the beginning of modern AI, back in the 1950s. Computer scientists decided to try to imitate conscious reasoning by building logical systems based on symbols. The method involved associating real-world entities with digital codes to create virtual models of the environment, which could then be projected back onto the world itself.

In later decades, as computing power grew, researchers switched to using statistics to extract patterns from massive quantities of data. These methods are often referred to as ‘machine learning’. Rather than trying to encode high-level knowledge and logical reasoning, machine learning employs a bottom-up approach in which algorithms discern relationships by repeating tasks, such as classifying the visual objects in images or transcribing recorded speech into text.

But algorithms are a long way from being able to think like us. The biggest distinction lies in our evolved biology, and how that biology processes information. Humans are made up of trillions of eukaryotic cells, which first appeared in the fossil record around 2.5 billion years ago. A human cell is a remarkable piece of networked machinery that has about the same number of components as a modern jumbo jet – all of which arose out of a longstanding, embedded encounter with the natural world.

We only have the world as it is revealed to us, which is rooted in our evolved, embodied needs as an organism. Nature ‘has built the apparatus of rationality not just on top of the apparatus of biological regulation, but also from it and with it’, as Antonio Damasio puts it.

In other words, we think with our whole body, not just with the brain. more> https://goo.gl/oBgkRF

The end of code

By Edward C. Monaghan – Over the past several years, the biggest tech companies in Silicon Valley have aggressively pursued an approach to computing called machine learning.

In traditional programming, an engineer writes explicit, step-by-step instructions for the computer to follow. With machine learning, programmers don’t encode computers with instructions. They train them.

If you want to teach a neural network to recognize a cat, for instance, you don’t tell it to look for whiskers, ears, fur, and eyes. You simply show it thousands and thousands of photos of cats, and eventually it works things out.
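A toy perceptron makes the ‘train, don’t program’ idea concrete (the features and data here are invented; real image classifiers are deep networks trained on raw pixels, not hand-picked scores). No rule about whiskers or fur is ever written down; the weights that separate cats from non-cats emerge from the labeled examples:

```python
# Toy "training, not programming": a perceptron learns a cat/not-cat
# boundary from labeled examples instead of hand-coded rules.

def train_perceptron(data, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, label in data:                 # label: 1 = cat, 0 = not cat
            pred = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
            err = label - pred                # no error -> no update
            w = [w[0] + lr*err*x[0], w[1] + lr*err*x[1]]
            b += lr * err
    return w, b

# Hypothetical features: (fur score, whisker score)
examples = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]
w, b = train_perceptron(examples)

def classify(x):
    return 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0

print(classify((0.85, 0.9)))   # a new, unseen example -> 1
```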

But here’s the thing: With machine learning, the engineer never knows precisely how the computer accomplishes its tasks. The neural network’s operations are largely opaque and inscrutable. It is, in other words, a black box.

The implications of an unparsable machine language aren’t just philosophical. A world run by neurally networked deep-learning machines requires a different workforce.

Analysts have already started worrying about the impact of AI on the job market, as machines render old skills irrelevant. Programmers might soon get a taste of what that feels like themselves.

Danny Hillis has declared the end of the age of Enlightenment, our centuries-long faith in logic, determinism, and control over nature. Hillis says we’re shifting to what he calls the age of Entanglement. more> http://goo.gl/Xmk3ia