Tag Archives: Skills

Updates from Siemens

Bearings manufacturer meets stringent accuracy requirements while improving productivity
Siemens – Humankind has been trying to improve the mobility of people and materials by reducing friction between moving parts for centuries. The creators of the pyramids and Stonehenge were able to move massive structures by placing cylindrical wooden rollers beneath great weights to reduce the coefficient of friction and the force required to move them. These world wonders were made possible by some of the earliest known applications of bearings.

Modern bearings with races and balls were first documented in the fifteenth century by Leonardo da Vinci for his helicopter model. Since then, the design, mobility and precision of bearings have developed dramatically in many application domains. In the semiconductor and medical device industries, miniaturization and increasing product complexity have revolutionized motion systems and their components. The precision and accuracy of motion systems are highly dependent on bearing assemblies and how they are integrated into systems. Precisie Metaal Bearings (PM-Bearings) is one of only a few manufacturers in the world that provide high-precision linear bearings.

PM-Bearings specializes in the design and manufacture of high-precision linear bearings, motion systems and positioning stages, and supplies the high-end semiconductor, medical device and machine tool industries. The company was founded in 1966 as a manufacturer of linear bearings, and has expanded to include design, manufacturing and assembly of custom-made multi-axis positioning stages with complete mechatronic integration. Located in Dedemsvaart, the Netherlands, the company employs 140 people and supplies customers worldwide.

The company’s products range from very small bearings (10 millimeters in length) up to systems with footprints of 1.2 to 1.5 square meters and stroke lengths of one meter. The portfolio encompasses linear motion components including precision slides, positioning tables and bearing stages. PM-Bearings is part of the PM group, along with other companies specialized in high-tech machining. Its global customer base extends from Silicon Valley to Shenzhen.

To maintain a competitive edge, PM-Bearings knew that complete control of the product realization, from design to delivery, was essential. This is why the company chose a comprehensive set of solutions from product lifecycle management (PLM) specialist Siemens PLM Software. These include NX™ software for computer-aided design (CAD), Simcenter™ software for performance prediction, NX CAM for computer-aided manufacturing and Teamcenter® software for PLM to make certain that all stakeholders use the same data and workflows to make the right decisions. more>

Related>

Updates from Chicago Booth

Why banning plastic bags doesn’t work as intended
Benefits of bag regulations are mitigated by changes in consumer behavior
By Rebecca Stropoli – As well-intentioned bans on plastic shopping bags roll out across the United States, there’s an unintended consequence that policy makers should take into account. It turns out that when shoppers stop receiving free bags from supermarkets and other retailers, they make up for it by buying more plastic trash bags, substituting one form of plastic film for another and significantly reducing the environmental effectiveness of bag bans, according to the University of Sydney’s Rebecca L. C. Taylor.

Economists call this phenomenon “leakage”—when partial regulation of a product results in increased consumption of unregulated goods, Taylor writes. But her research focusing on the rollout of bag bans across 139 California cities and counties from 2007 to 2015 puts a figure on the leakage and develops an estimate for how much consumers already reuse those flimsy plastic shopping bags.

This is a live issue. After all those localities banned disposable bags, California outlawed them statewide, in 2016. In April 2019, New York became the second US state to impose a broad ban on single-use plastic bags. Since 2007, more than 240 local governments in the US have enacted similar policies.

She finds that the bag bans reduced the use of disposable shopping bags by 40 million pounds a year. But purchases of trash bags increased by almost 12 million pounds annually, offsetting about 29 percent of the benefit, her model demonstrates. Sales of small trash bags jumped 120 percent, of medium bags, 64 percent, and of tall kitchen garbage bags, 6 percent. Moreover, use of paper bags rose by more than 80 million pounds, or 652 million sacks, she finds. more>
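The offset figure follows directly from the reported quantities. A quick sanity check using the article's numbers (the trash-bag increase is "almost 12 million" pounds, so a value just under 12 is assumed here for illustration, which is consistent with the reported 29 percent):

```python
# Figures from Taylor's study (millions of pounds per year)
ban_reduction = 40.0        # disposable shopping bags eliminated by the bans
trash_bag_increase = 11.5   # "almost 12 million" -- assumed value for illustration

# Leakage: the share of the ban's benefit offset by extra trash-bag purchases
offset_pct = trash_bag_increase / ban_reduction * 100
print(f"Leakage offset: {offset_pct:.0f}%")  # close to the reported ~29 percent
```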

Related>

Updates from Siemens

Well control equipment: Metal hat, Fireproof coveralls… CFD
By Gaetan Bouzard – In the Oil & Gas industry, accounting for the risks associated with well control — such as subsea plumes, atmospheric dispersion, fire and explosion — is critical for minimizing the impact on the entire system and on operational efficiency, and for ensuring worker health and safety. Risks to system integrity must be prevented at the design phase, but also addressed should hazards arise over the equipment's lifetime or while the system is in operation.

Last September 25th, Alistair E. Gill of Wild Well Control demonstrated the value of advanced structural and fluid dynamics simulation for well control, emergency response and planning, as part of a live webinar organized by Siemens and the Society of Petroleum Engineers. This article summarizes his presentation; for more insight, feel free to watch our On-Demand Webinar.

Frankly, when people think of well control in the Oil & Gas industry, the usual image is of a disaster scene where crews in protective gear struggle to put out a huge fire. In reality, companies such as Wild Well Control use modern, innovative techniques like Computational Fluid Dynamics (CFD) simulation to support the teams working a well control incident while trying to preserve asset integrity.

Mr. Gill provided several examples demonstrating the simulation techniques used, including:

  • Subsea plume and gas dispersion modeling to understand where hydrocarbons go in the event of a blowout
  • Radiant heat modeling in the case of a fire
  • Erosion modeling
  • Thermal and structural analysis

There are three major categories of simulation used: first, everything related to flow within the wellbore, looking at kick tolerance, dynamic kill or bullheading; next, anything to do with 3D flow using CFD simulation, which is the main focus of this article; and finally, structural analysis using finite element modeling. more>

Related>

Updates from Ciena

SD-WAN Gets (More) Real for Service Providers

By Kevin Wade – Interest in Software-Defined Wide Area Networking (SD-WAN), which is designed to offer enterprises cohesive end-to-end visibility and control of their multi-site WAN, continues to grow. Although SD-WAN was originally envisioned to give enterprises a ‘DIY’ approach to WAN management, most industry analysts and experts agree that managed SD-WAN services are the predominant consumption model for enterprises today, and into the foreseeable future.

The trend toward managed SD-WAN services is good news for service providers, many of which were initially cautious that SD-WAN might reduce their revenues and/or weaken their relationships with key business customers. To the contrary, SD-WAN services have emerged as a rapidly growing new source of revenues, as well as one that offers service providers new opportunities to improve the customer experience.

I’ve been following the SD-WAN movement closely since nearly the beginning, and have been pleased to see some recent developments that signal the increasing maturity of SD-WAN services.

Without a doubt, SD-WAN services are becoming more established and accepted. And while Blue Planet isn’t inherently an SD-WAN solution, the deployment of SD-WAN services is one of Blue Planet’s biggest drivers and most common use cases. more>

Related>

Updates from Siemens

Revolutionizing Plant Performance with the Digital Twin and IIoT eBook
By Jim Brown – How can manufacturers use the digital twin and industrial IoT to dramatically improve manufacturing and product performance?

The manufacturing industries are getting more challenging. Manufacturers must evolve as new technologies remove barriers to entry and enable new, digital players to challenge market share. Operational efficiency is no longer enough to compete in today’s era of digitalization and Industry 4.0.

To remain competitive, companies have to maintain high productivity while offering unprecedented levels of flexibility and responsiveness. We believe this is a fundamental disruption that will change the status quo. To survive, manufacturers need to digitalize operations in order to improve speed, agility, quality, costs, customer satisfaction, and the ability to tailor to customer and market needs.

One of the most compelling digitalization opportunities is adopting the digital twin. This approach combines a number of digital technologies to significantly improve quality and productivity. It starts with comprehensive, virtual models of physical assets – products and production lines – to help optimize designs. But the value is much greater because the physical and virtual twins are connected and kept in sync with real data from the Internet of Things (IoT) and Industrial IoT (IIoT).

Further, companies can use analytics to analyze digital twin data to develop deep insights and intelligence that allow for real-time intervention and long-term, continuous improvement.

The digital twin holds significant productivity and quality opportunities for the plant. It can be used to understand when the plant isn’t operating as intended. It can identify or predict equipment issues that can result in unplanned downtime or correct process deviations before they result in quality slippage, scrap, and rework. more>

Related>

Updates from Georgia Tech

He Quieted Deafening Jets
By Ben Brumfield – In 1969, the roar of a passing jet airliner broke a bone in Carolyn Brobrek’s inner ear, as she sat in the living room of her East Boston home. Many flights took off too close to rooftops then, but even at a distance, jet engines were a notorious source of permanent hearing loss.

For decades, Krishan Ahuja has tamed jet noise, work for which the National Academy of Engineering elected him as a new member this year. Today, Ahuja is an esteemed researcher at the Georgia Institute of Technology, but he got his start more than 50 years ago as an engineering apprentice in Rolls-Royce’s aero-engine division, eventually landing in its jet noise research department.

Jet-setters had been a rare elite, but early in Ahuja’s career in the 1970s, air travel went mainstream, connecting the globe. The number of flights multiplied over the years, and jet engine thrust grew stronger, but remarkably, human exposure to passenger jet noise in the same time period plummeted to a fraction of what it had once been, according to the Federal Aviation Administration.

Ahuja not only had a major hand in that change; he has also felt the transition himself.

“In those days, if jets went over your house and you were outside, you’d feel like you needed to put your hands over your ears. Not today,” said Ahuja, who is a Regents Researcher at the Georgia Tech Research Institute (GTRI) and Regents Professor in Georgia Tech’s Daniel Guggenheim School of Aerospace Engineering. more>

Related>


Updates from Datacenter.com

The Hidden Costs of Hosting Your Infrastructure On-Premise
Datacenter.com – There are many myths around it and the choice between hosting your mission-critical infrastructure in-house or accommodating your IT infrastructure in a professional data center. Managing and implementing your business-critical infrastructure in-house is a huge responsibility on top of your daily work. The specialist requirements with regard to the management of electricity, cooling and security are hard at the helm.
Datacenter.com – There are many myths surrounding the choice between hosting your mission-critical infrastructure in-house and placing it in a professional data center. Managing and implementing business-critical infrastructure in-house is a huge responsibility on top of your daily work, and the specialist requirements for managing power, cooling and security are demanding.

In addition, the excessive overhead on company resources to optimally manage the infrastructure and its environment creates its own set of challenges. The choice should not be made on the basis of costs alone; it depends on your business requirements and specific usage patterns, as well as the cost of the service.

The role of IT has come a long way, with the success of companies now heavily dependent on the ability of the organization to digitally transform. IT and the infrastructure they build are under extreme pressure to perform, with most IT departments facing a tough battle to meet the demand from the operation (business). Digital technologies are drastically changing the way we do business, with business rules changing every day. With companies that continue to reinvent themselves, IT requirements are becoming increasingly difficult to predict now and in the future.

Research shows that nearly 70% of digital transformation projects are not finished within the set deadlines, with only one fifth of customers claiming that their transformation projects are successful in the long term. Given that, taking on the extra financial and resource costs needed to build and maintain your own mission-critical infrastructure makes little sense as your digital journey and your business continue to mature.

As such, moving your business-critical on-premise infrastructure to a specialized colocation data center has quickly become the preferred choice that many organizations consider as part of their search to successfully digitize their business activities. more>

Related>

Eye diagrams: The tool for serial data analysis

By Arthur Pini – The eye diagram is a general-purpose tool for analyzing serial digital signals. It shows the effects of vertical noise, horizontal jitter, duty cycle distortion, inter-symbol interference, and crosstalk, all of which can close the “eye.” While engineers have used eye diagrams for decades, oscilloscopes continually get new features that increase the eye diagram’s value.

Oscilloscopes form eye diagrams—the separation between the two binary data states “1” and “0”—by overlaying multiple single clock periods on a persistence display. The accumulation shows the history of multiple acquisitions.

Additive noise tends to close the eye vertically while timing jitter and uncertainty closes the eye horizontally. Duty cycle distortion (DCD) and inter-symbol interference (ISI) change the shape of the eye. The channel will fail if the eye closes to the point where the receiver can no longer recognize “0” and “1” states.

In the days of analog oscilloscopes, the eye diagram was formed by triggering the oscilloscope with the serial data clock and acquiring multiple bits over time using a persistence or storage display. This technique adds trigger uncertainty, or trigger jitter, to the eye diagram for each acquisition. Digital oscilloscopes instead form the eye by acquiring a very long record containing many serial bits.

The clock period is determined, and the waveform is broken up or “sliced” into multiple single-bit acquisitions overlaid in the persistence display. In this way, all the data is acquired with a single value of trigger jitter that’s eliminated by using differential time measurements within the eye. more>
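The slicing step described above can be sketched in a few lines. This is a minimal illustration, not any oscilloscope vendor's implementation, and it assumes a uniformly sampled record with an integer number of samples per unit interval (UI):

```python
def slice_into_eye(samples, samples_per_ui):
    """Cut a long sampled record into one-UI segments.

    Overlaying the returned rows on a persistence display forms the eye.
    """
    n_ui = len(samples) // samples_per_ui  # drop any partial bit at the end
    return [samples[i * samples_per_ui:(i + 1) * samples_per_ui]
            for i in range(n_ui)]

# Synthetic NRZ record: 8 bits, 10 samples per bit
bits = [1, 0, 1, 1, 0, 0, 1, 0]
wave = [float(b) for b in bits for _ in range(10)]
eye = slice_into_eye(wave, samples_per_ui=10)
# eye holds 8 rows of 10 samples each, ready to overlay
```

In practice, as the article notes, the clock period is first recovered from the data itself, and differential time measurements within the eye cancel the single remaining trigger-jitter value.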

Updates from Chicago Booth

Who’s at fault for student-loan defaults?
By Howard R. Gold – A central driver of growing income inequality in recent decades has been the earnings premium commanded by those with technical skills, and a widening gap between college graduates and those with a high-school diploma or less.

Workers in the United States have responded by seeking college courses to improve their skills, and many have been drawn to for-profit institutions, which offer two- or four-year degrees or professional certificates in fields such as health administration, culinary arts, and cosmetology. But rather than enjoying an income boost, many graduates of for-profit schools have found themselves struggling to pay back student loans, and defaulting on their debts.

This has particularly affected nontraditional students, according to research by Harvard’s David J. Deming, Claudia Goldin, and Lawrence F. Katz.

Nontraditional students tend to be older than 25 and often they are the first in their families to attend college. They tend to have lower family incomes than typical college students. They are disproportionately women and single parents. They are more likely to be Hispanic or African American.

To be sure, college tuition rose almost 360 percent between 1985 and 2015, and graduates of professional schools, which boast some of the highest tuition rates, tend to owe the most. The median student debt of a new medical-school graduate was $190,000 in 2017, as reported by the Association of American Medical Colleges, while the average debt for graduates of US business schools was $70,000, according to the consumer-finance site SoFi.com, which derived the figure from 60,000 student-loan refinancing applications submitted between January 2014 and September 2016. more>

Related>

Updates from Ciena

Trouble-to-Resolve: Assure Layer 3 Service Performance in Minutes
By Don Jacob – Service provider networks have come a long way from the flat networks of yesteryear. Today, they are highly complex with multiple hierarchies and layers, while running a plethora of services and technologies. Providers use the same underlying network to cater to different applications, ranging from financial applications to streaming video, each with its own unique performance and fault-tolerance requirements.

In this complex scenario, how can service providers assure the performance of their Layer 3 services, verify that services are being delivered, and ensure customer satisfaction? Take the case of a service provider that delivers MPLS services to hundreds of customers, and consider how a network engineer handles a routing issue without a routing analytics solution.

Today, when a customer raises a ticket for a reachability or service delivery problem, the provider manually analyzes the issue, making the trouble-to-resolve process long and time-consuming.

To start with, if the customer raises the trouble ticket while a connectivity issue is in progress, the first thing the provider needs to know is the routing path taken by the service. This requires the network engineer to find the source router and run a traceroute from it to determine all the hops along the path. Once the routers along the path have been identified, the engineer logs in to each one to check its performance.

This process is repeated on all routers along the path until the problematic router or link is identified. more>
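The manual workflow above can be partially scripted. Here is a minimal sketch (the router names and addresses in the sample output are illustrative, not from any real network) that extracts one responding address per hop from standard traceroute output, so that each router along the path can then be queried in turn:

```python
import re

def parse_traceroute_hops(output):
    """Extract one responding IPv4 address per hop line of traceroute output."""
    hops = []
    for line in output.splitlines():
        # Hop lines start with a hop number; '* * *' timeout lines have no address.
        m = re.match(r"\s*\d+\s+.*?\((\d+\.\d+\.\d+\.\d+)\)", line)
        if m:
            hops.append(m.group(1))
    return hops

sample = """traceroute to 203.0.113.9, 30 hops max
 1  pe-router (192.0.2.1)  0.5 ms
 2  p-router (192.0.2.5)  1.2 ms
 3  * * *
 4  ce-router (203.0.113.9)  3.8 ms"""

hops = parse_traceroute_hops(sample)
# hops == ['192.0.2.1', '192.0.2.5', '203.0.113.9']; hop 3 timed out
```

A script like this only automates path discovery; the per-router performance checks, repeated until the problematic router or link is found, are the part a routing analytics solution is meant to eliminate.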

Related>