Tag Archives: Physics

Universe in a bubble

Maybe we don’t have to speculate about what life is like inside a bubble. It might be the only cosmic reality we know.
By J Richard Gott – The explanation for the accelerating cosmic expansion, surprising as it was at first, was readily available from the theoretical toolbox of physicists. It traced back to an idea from Albert Einstein, called the cosmological constant. Einstein invented it in 1917, as part of a failed attempt to produce a static Universe based on his general theory of relativity. At that time, the data seemed to support such a model.

In 1922, the Russian mathematician Alexander Friedmann showed that relativity in its simplest form, without the cosmological constant, seemed to imply an expanding or contracting Universe. When Hubble’s observations showed conclusively that the Universe was expanding, Einstein abandoned the cosmological constant, but the possibility that it existed never went away.

Then the Belgian physicist Georges Lemaître showed that the cosmological constant could be interpreted in a physical way as the vacuum of empty space possessing a finite energy density accompanied by a negative pressure. That idea might sound rather bizarre at first. We are accustomed, after all, to thinking that the vacuum of empty space should have a zero energy density, since it has no matter in it. But suppose empty space had a finite but small energy density – there’s no inherent reason why such a thing shouldn’t be possible.

Negative pressure has a repulsive gravitational effect, but at the same time the energy itself has an attractive gravitational effect, since energy is equivalent to mass. (This is the relationship described by E=mc², an implication of special relativity.) Operating in three directions – left-right, front-back, and up-down – the negative pressure creates repulsive effects three times as potent as the attractive effects of the vacuum energy, making the overall effect repulsive. We call this vacuum energy dark energy, because it produces no light. Dark energy is the widely accepted explanation for why the expansion rate of the Universe is speeding up.
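That bookkeeping – pressure counted once per spatial direction, energy density once – is captured by the standard acceleration equation of cosmology (textbook general relativity, consistent with the argument above):

\[
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right).
\]

For vacuum energy the equation of state is p = −ρc², so the source term becomes ρ − 3ρ = −2ρ, the right-hand side turns positive, and the expansion accelerates – exactly the net repulsive effect described above.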

Distant galaxies will flee from us because of the stretching of space between us and them. After a sufficient number of doublings, the space between them and us will be stretching so fast that their light will no longer be able to cross this ever-widening gap to reach us. Distant galaxies will fade from view and we will find ourselves seemingly alone in the visible Universe. more>
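Why, even given unlimited time, can light no longer cross the gap? A minimal sketch, assuming the expansion rate settles to a constant H (a dark-energy-dominated universe with scale factor a(t) = a₀e^{Ht}): the total comoving distance a photon emitted today can ever cover is

\[
\chi_{\max} = \int_{0}^{\infty} \frac{c\,dt}{a_{0}\,e^{Ht}} = \frac{c}{a_{0}H},
\]

a finite number. Any galaxy whose comoving distance already exceeds this horizon can never receive our light, nor we theirs – the fading-from-view behavior the article describes.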

Updates from Georgia Tech

Brilliant Glow of Paint-On Semiconductors Comes from Ornate Quantum Physics
By Ben Brumfield – LED lights and monitors, as well as high-quality solar panels, were born of a revolution in semiconductors that efficiently convert energy to light or vice versa. Now, next-generation semiconducting materials are on the horizon, and in a new study, researchers have uncovered the eccentric physics behind their potential to transform lighting technology and photovoltaics yet again.

Comparing the quantum properties of these emerging so-called hybrid semiconductors with those of their established predecessors is rather like comparing the Bolshoi Ballet to jumping jacks. Twirling troupes of quantum particles undulate through the emerging materials, creating, with ease, highly desirable optoelectronic (light-electronic) properties, according to a team of physical chemists led by researchers at the Georgia Institute of Technology.

These same properties are impractical to achieve in established semiconductors.

The particles moving through these new materials also engage the material itself in the quantum action, akin to dancers enticing the floor to dance with them. The researchers were able to measure patterns in the material caused by the dancing and relate them to the emerging material’s quantum properties and to energy introduced into the material.

These insights could help engineers work productively with the new class of semiconductors. more>

Related>

Updates from Georgia Tech

Finally, a Robust Fuel Cell that Runs on Methane at Practical Temperatures
By Ben Brumfield – Fuel cells have not been particularly known for their practicality and affordability, but that may have just changed. There’s a new cell that runs on cheap fuel at temperatures comparable to automobile engines and which slashes materials costs.

Though the cell is in the lab, it has high potential to someday electrically power homes and perhaps cars, say the researchers at the Georgia Institute of Technology who led its development. In a new study in the journal Nature Energy, the researchers detailed how they reimagined the entire fuel cell with the help of a newly invented fuel catalyst.

The catalyst has dispensed with high-priced hydrogen fuel by making its own out of cheap, readily available methane. And improvements throughout the cell dramatically cooled the seething operating temperatures that are customary in methane fuel cells – a striking engineering accomplishment. more>
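The excerpt doesn’t spell out the reaction chemistry; as a hedged illustration only, one generic route by which a catalyst can liberate hydrogen from methane is steam reforming – the actual mechanism and catalyst are detailed in the Nature Energy paper:

\[
\mathrm{CH_{4}} + 2\,\mathrm{H_{2}O} \;\longrightarrow\; \mathrm{CO_{2}} + 4\,\mathrm{H_{2}}.
\]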

Related>

Updates from Georgia Tech

Looking Back in Time to Watch for a Different Kind of Black Hole
By John Toon – Black holes form when stars die, allowing the matter in them to collapse into an extremely dense object from which not even light can escape. Astronomers theorize that massive black holes could also form at the birth of a galaxy, but so far nobody has been able to look far enough back in time to observe the conditions creating these direct collapse black holes (DCBH).

The James Webb Space Telescope, scheduled for launch in 2021, might be able to look far enough back into the early Universe to see a galaxy hosting a nascent massive black hole. Now, a simulation done by researchers at the Georgia Institute of Technology has suggested what astronomers should look for if they search the skies for a DCBH in its early stages.

DCBH formation would be initiated by the collapse of a large cloud of gas during the early formation of a galaxy, said John H. Wise, a professor in Georgia Tech’s School of Physics and the Center for Relativistic Astrophysics. But before astronomers could hope to catch this formation, they would have to know what to look for in the spectra that the telescope could detect, which is principally infrared.

Black holes take about a million years to form, a blip in galactic time. In the DCBH simulation, that first step involves gas collapsing into a supermassive star as much as 100,000 times more massive than our sun. The star then undergoes gravitational instability and collapses into itself to form a massive black hole. Radiation from the black hole then triggers the formation of stars over a period of about 500,000 years, the simulation suggested. more>

Related>

Anthropic arrogance

By David P Barash – Welcome to the ‘anthropic principle’, a kind of Goldilocks phenomenon or ‘intelligent design’ for the whole Universe. According to its proponents, the Universe is fine-tuned for human life.

The message is clearly an artificial one and not the result of random noise. Or maybe the Universe itself is alive, and the various physical and mathematical constants are part of its metabolism. Such speculation is great fun, but it’s science fiction, not science.

It should be clear at this point that the anthropic argument readily devolves – or dissolves – into speculative philosophy and even theology. Indeed, it is reminiscent of the ‘God of the gaps’ perspective, in which God is posited whenever science hasn’t (yet) provided an answer.

Calling upon God whenever there is a gap in our scientific understanding may be tempting, but it is not even popular among theologians, because as science grows, the gaps – and thus, God – shrink. It remains to be seen whether the anthropic principle, in whatever form, succeeds in expanding our sense of ourselves beyond that illuminated by science. I wouldn’t bet on it. more>

Updates from Georgia Tech

Neuroscientists Team with Engineers to Explore how the Brain Controls Movement
By Carol Clark – Scientists have made remarkable advances in recording the electrical activity that the nervous system uses to control complex skills, leading to insights into how the nervous system directs an animal’s behavior.

“We can record the electrical activity of a single neuron, and large groups of neurons, as animals learn and perform skilled behaviors,” says Samuel Sober, an associate professor of biology at Emory University who studies the brain and nervous system. “What’s missing,” he adds, “is the technology to precisely record the electrical signals of the muscles that ultimately control that movement.”

The Sober lab is now developing that technology through a collaboration with the lab of Muhannad Bakir, a professor in Georgia Tech’s School of Electrical and Computer Engineering.

The technology will be used to help understand the neural control of many different skilled behaviors to potentially gain insights into neurological disorders that affect motor control.

“By combining expertise in the life sciences at Emory with the engineering expertise of Georgia Tech, we are able to enter new scientific territory,” Bakir says. “The ultimate goal is to make discoveries that improve the quality of life of people.” more>

Related>

Updates from Georgia Tech

New Cell Manufacturing Research Facility will Change Approaches to Disease Therapies
By John Toon – The vision of making affordable, high-quality cell-based therapies available to hundreds of thousands of patients worldwide moved closer to reality June 6 with the dedication of a new cell manufacturing research facility at Georgia Tech aimed at changing the way we think about medical therapies.

The new Good Manufacturing Practice (GMP)-like facility, compliant with ISO 8 and ISO 7 cleanroom standards, is part of the existing Marcus Center for Therapeutic Cell Characterization and Manufacturing (MC3M). The center was established in 2016 and made possible by a $15.75 million gift from philanthropist Bernie Marcus, with a $7.25 million investment from Georgia Tech and another $1 million from the Georgia Research Alliance.

MC3M is already helping researchers from Georgia Tech and partner organizations develop ways to provide therapeutic living cells of consistent quality, in quantities large enough to meet the growing demand for these cutting-edge treatments. more>

Related>

Updates from Ciena

Coherent optical turns 10: Here’s how it was made
By Bo Gowan – This is the story of how a team of over 100 people in Ciena’s R&D labs pulled together an impressive collection of technology innovations that created a completely new way of transporting data over fiber… and in the process helped change the direction of the entire optical networking industry.

Back in 2008, many in the industry had serious doubts that commercializing coherent fiber optic transport was even possible, much less the future of optical communications. That left a team of Ciena engineers to defy the naysayers and hold the torch of innovation.

“What we first began to see at Telecom 99 was that we could achieve these high speeds the brute force way, but it was really, really painful,” said Dino DiPerna in an interview. DiPerna, along with many on his team, was brought on by Ciena as part of the company’s 2010 acquisition of Nortel’s optical business. He now serves as Ciena’s Vice President of Packet-Optical Platforms R&D and is based in Ottawa.

By ‘brute force’ Dino is referring to the traditional time-division multiplexing (TDM) method that had been used until then to speed up optical transmission – basically turning the light on and off at increasingly faster speeds (also called the baud or symbol rate). “But once you start pushing past 10 billion times per second, you begin running into significant problems,” said DiPerna.

Those complexities had to do with the underlying boundaries of what you can do with light. The fundamental issue at hand was the natural spread of light as it propagates along the fiber – caused by two phenomena called chromatic dispersion and polarization mode dispersion, or PMD. As you push past 10G speeds, the tolerance to chromatic dispersion goes down with the square of the baud rate. Due to PMD and noise from optical amplifiers, a 40 Gbaud stream will lose at least 75% of its reach compared to a 10 Gbaud stream.

This reach limitation had two consequences. First, it meant adding more costly regenerators to the network. Second, it meant that the underlying fiber plant required a more expensive, high-quality fiber to operate properly at 40G transmission speeds. more>
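To make the square-law above concrete, here is a minimal Python sketch; the 1/baud² scaling is taken from the article, while the function name and the normalization to 10 Gbaud are illustrative assumptions, not Ciena engineering data:

```python
# Back-of-the-envelope scaling of chromatic-dispersion (CD) tolerance
# with symbol rate, per the square-law described in the article.

def cd_tolerance(baud_gbd: float, ref_baud_gbd: float = 10.0) -> float:
    """Relative CD tolerance, normalized to 1.0 at the reference baud."""
    return (ref_baud_gbd / baud_gbd) ** 2

for baud in (10, 40, 100):
    print(f"{baud:>4} Gbaud -> relative CD tolerance {cd_tolerance(baud):.4f}")
# 10 Gbaud -> 1.0000, 40 Gbaud -> 0.0625 (1/16th), 100 Gbaud -> 0.0100:
# one reason brute-force TDM scaling hit a wall without coherent detection.
```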

Related>

Updates from Georgia Tech

Robot Monitors Chicken Houses and Retrieves Eggs
By John Toon – “Today’s challenge is to teach a robot how to move in environments that have dynamic, unpredictable obstacles, such as chickens,” said Colin Usher, a research scientist in GTRI’s Food Processing Technology Division.

“When busy farmers must spend time in chicken houses, they are losing money and opportunities elsewhere on the farm. In addition, there is a labor shortage when it comes to finding workers to carry out manual tasks such as picking up floor eggs and simply monitoring the flocks. If a robot could successfully operate autonomously in a chicken house 24 hours a day and seven days a week, it could then pick up floor eggs, monitor machinery, and check on birds, among other things. By assigning one robot to each chicken house, we could also greatly reduce the potential for introductions of disease or cross-contamination from one house to other houses.”

The autonomous robot is outfitted with an ultrasonic localization system similar to GPS but better suited to an indoor environment where GPS might not be available. This system uses low-cost ultrasonic beacons to indicate the robot’s orientation and its location in a chicken house. The robot also carries a commercially available time-of-flight camera, which provides three-dimensional (3D) depth data by emitting light signals and then measuring how long they take to return. Together, the localization and 3D data allow the robot’s software to devise navigation plans around chickens to perform tasks. more>
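As a rough illustration of the time-of-flight principle described above – a hedged sketch, not GTRI’s actual implementation – the depth calculation reduces to halving the round-trip light time:

```python
# Minimal sketch of the time-of-flight (ToF) depth principle: the camera
# emits a light pulse and converts its measured round-trip time into
# distance. Numbers and names here are illustrative assumptions.

C = 299_792_458.0  # speed of light, m/s

def depth_from_round_trip(round_trip_s: float) -> float:
    """Distance to the reflecting surface; light covers the path twice."""
    return C * round_trip_s / 2.0

# A 20-nanosecond round trip corresponds to about 3 m -- chicken-house scale.
print(f"{depth_from_round_trip(20e-9):.2f} m")  # -> 3.00 m
```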

Related>

Updates from Georgia Tech

Imaging Technique Unlocks the Secrets of 17th Century Artists
By John Toon – The secrets of 17th century artists can now be revealed, thanks to 21st century signal processing. Using modern high-speed scanners and advanced signal processing techniques, researchers at the Georgia Institute of Technology are peering through layers of pigment to see how painters prepared their canvasses, applied undercoats, and built up layer upon layer of paint to produce their masterpieces.

The images they produce using the terahertz scanners and the processing technique – which was mainly developed for petroleum exploration – provide an unprecedented look at how artists did their work three centuries ago. The level of detail produced by this terahertz reflectometry technique could help art conservators spot previous restorations of paintings, highlight potential damage – and assist in authenticating the old works.

Beyond old art, the nondestructive technique also has potential applications for detecting skin cancer, ensuring proper adhesion of turbine blade coatings and measuring the thickness of automotive paints.

Without the signal processing, researchers might only be able to identify layers 100 to 150 microns thick. But using the advanced processing, they can distinguish layers just 20 microns thick. Paintings done before the 18th century have been challenging to study because their paint layers tend to be thin, said David Citrin, a professor at the Georgia Institute of Technology. Individual pigments cannot be resolved by the technique, though the researchers hope to be able to obtain that information in the future. more>
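As a hedged illustration of the arithmetic behind those micron figures, assuming the usual pulsed-reflectometry relation between echo delay and layer thickness (the refractive index below is a stand-in value, not a measured pigment property):

```python
# Sketch of pulsed-reflectometry arithmetic: a terahertz pulse partially
# reflects at each paint interface, and the delay between successive
# echoes encodes layer thickness. Values here are illustrative.

C = 299_792_458.0  # speed of light in vacuum, m/s

def layer_thickness_m(echo_delay_s: float, refractive_index: float) -> float:
    """Thickness from the inter-echo delay; the pulse crosses the layer twice."""
    return C * echo_delay_s / (2.0 * refractive_index)

n_paint = 2.0                      # assumed terahertz index of a paint layer
delay = 2 * n_paint * 20e-6 / C    # echo spacing for a 20-micron layer
print(f"{delay * 1e12:.3f} ps -> "
      f"{layer_thickness_m(delay, n_paint) * 1e6:.1f} microns")
# -> 0.267 ps -> 20.0 microns: the sub-picosecond echo spacings that the
#    advanced signal processing must resolve.
```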

Related>