What AI Will Do to Job Availability

Image Credit: Mises

The Fear of Mass Unemployment Due to Artificial Intelligence and Robotics Is Unfounded

People are arguing over whether artificial intelligence (AI) and robotics will eliminate human employment. People seem to have an all-or-nothing belief that either the use of technology in the workplace will destroy human employment and purpose or it won’t affect it at all. The replacement of human jobs with robotics and AI is known as “technological unemployment.”

Although robotics can turn materials into economic goods in a fraction of the time it would take a human, in some cases using minimal human energy, some claim that AI and robotics will actually bring about increasing human employment. According to a 2020 Forbes projection, AI and robotics will be a strong creator of jobs and work for people across the globe in the near future. However, also in 2020, Daron Acemoglu and Pascual Restrepo published a study that projected negative job growth when AI and robotics replace human jobs, predicting significant job loss each time a robot replaces a human in the workplace. But two years later, an article in The Economist showed that many economists have backtracked on their projection of a high unemployment rate due to AI and robotics in the workplace. According to the 2022 Economist article, “Fears of a prolonged period of high unemployment did not come to pass. . . . The gloomy narrative, which says that an invasion of job-killing robots is just around the corner, has for decades had an extraordinary hold on the popular imagination.” So which scenario is correct?

Contrary to popular belief, no industrialized nation has ever completely replaced human energy with technology in the workplace. For instance, the steam shovel never put construction workers out of work; whether people want to work in construction is a different question. And bicycles did not become obsolete because of vehicle manufacturing: “Consumer spending on bicycles and accessories peaked at $8.3 billion in 2021,” according to an article from the World Economic Forum.

Do people generally think AI and robotics can run an economy without human involvement, energy, ingenuity, and cooperation? While AI and robotics have boosted economies, they cannot plan or run an economy or create technological unemployment worldwide. “Some countries are in better shape to join the AI competition than others,” according to the Carnegie Endowment for International Peace. Although an accurate statement, it misses the fact that productive economies adapt to technological changes better than nonproductive economies. Put another way, productive people are even more effective when they use technology. Firms using AI and robotics can lower production costs, lower prices, and stimulate demand; hence, employment grows if demand and therefore production increase. In the unlikely event that AI or robotic productive technology does not lower a firm’s prices and production costs, employment opportunities will decline in that industry, but employment will shift elsewhere, potentially expanding another industry’s capacity. This industry may then increase its use of AI and robotics, creating more employment opportunities there.

In the not-so-distant past, office administrators did not know how to use computers, but when the computer entered the workplace, it did not eliminate administrative employment as was initially predicted. Now here we are, walking around with minicomputers in our pants pockets. The introduction of the desktop computer did not eliminate human administrative workers—on the contrary, the computer has provided more employment since its introduction in the workplace. Employees and business owners, sometimes separated by time and space, use all sorts of technological devices, communicate with one another across vast networks, and can be increasingly productive.

I remember attending a retirement party held by a company where I worked decades ago. The retiring employee told us all a story about when the company brought in its first computer back in the late ’60s. The retiree recalled, “The boss said we were going to use computers instead of typewriters and paper to handle administrative tasks. The next day, her department went from a staff of thirty to a staff of five.” The day after the department installed computers, twenty-five people left the company to seek jobs elsewhere so they would not “have to learn and deal with them darn computers.”

People often become afraid of losing their jobs when firms introduce new technology, particularly technology that is able to replicate human tasks. However, mass unemployment due to technological innovation has never happened in any industrialized nation. The notion that AI will disemploy humans in the marketplace is unfounded. Mike Thomas noted in his article “Robots and AI Taking Over Jobs: What to Know about the Future of Jobs” that “artificial intelligence is poised to eliminate millions of current jobs—and create millions of new ones.” The social angst about the future of AI and robotics is reminiscent of the early nineteenth-century Luddites of England and their fear of replacement technology. Luddites, heavily employed in the textile industry, feared the weaving machine would take their jobs. They traveled throughout England breaking and vandalizing machines and new manufacturing technology because of their fear of technological unemployment. However, as the textile industry there became capitalized, employment in that industry actually grew. History tells us that technology drives the increase of work and jobs for humans, not the opposite.

We should look forward to unskilled and semiskilled workers’ upgrading from monotonous work because of AI and robotics. Of course, AI and robotics will have varying effects on different sectors; but as a whole, they are enablers and amplifiers of human work. As noted, the steam shovel did not disemploy construction workers. The taxi industry was not eliminated because of Uber’s technology; if anything, Uber’s new AI technology lowered the barriers of entry to the taxi industry. Musicians were not eliminated when music was digitized; instead, this innovation gave musicians larger platforms and audiences, allowing them to reach millions of people with the swipe of a screen. And dating apps running on AI have helped millions of people fall in love and live happily ever after.

About the Author

Raushan Gross is an Associate Professor of Business Management at Pfeiffer University. His works include Basic Entrepreneurship, Management and Strategy, and the e-book The Inspiring Life and Beneficial Impact of Entrepreneurs.

Guess the Odds that the NCAA Games Will Attract More Gambling in 2023

Image Credit: Fictures (Flickr)

As March Madness Looms, Growth in Legalized Sports Betting May Pose a Threat to College Athletes

With March Madness beginning on March 14, 2023, it’s a sure bet that millions of Americans will be making wagers on the annual college basketball tournament.

The American Gaming Association estimates that in 2022, 45 million people – or more than 17% of American adults – planned to wager US$3.1 billion on the NCAA tournament. That makes it one of the nation’s most popular sports betting events, alongside contests such as the Kentucky Derby and the Super Bowl. By at least one estimate, March Madness is the most popular betting target of all.

While people have been betting on March Madness for years, one difference now is that betting on college sports is legal in many states. This is largely due to a 2018 Supreme Court ruling that cleared the way for each state to decide whether to permit people to gamble on sporting events. Prior to the ruling, legal sports betting was only allowed in Nevada.

Since the ruling, sports betting has grown dramatically. Currently, 36 states allow some form of legalized sports betting. And now, Georgia, Maine and Kentucky are proposing legislation to make sports betting legal.

About two weeks after sports betting became legal in Ohio on Jan. 1, 2023, someone disappointed by the University of Dayton men’s basketball team’s unexpected loss to Virginia Commonwealth University made threats and left disparaging messages targeting Dayton athletes and the coaching staff.

The Ohio case is by no means isolated. In 2019, a Babson College student who was a “prolific sports gambler” was sentenced to 18 months in prison for sending death threats to at least 45 professional and collegiate athletes in 2017.

Faculty members of Miami University’s Institute for Responsible Gaming, Lottery, and Sport are concerned that the increasing prevalence of sports betting could lead to more such incidents, putting more athletes in danger of threats from disgruntled gamblers who blame them for their gambling losses.

The anticipated growth in sports gambling is quite sizable. Analysts estimate the market in the U.S. may reach over US$167 billion by 2029.

Gambling Makes Inroads into Colleges

Concerns over college athletes being targeted by upset gamblers are not new. Players and sports organizations have expressed worry that expanded gambling could lead to harassment and compromise their safety. Such concerns led the nation’s major sports organizations – MLB, NBA, NFL, NHL and NCAA – to sue New Jersey in 2012 over a plan to initiate legal sports betting in that state. They argued that sports betting would make the public think that games were being thrown. Ultimately, the Supreme Court ruled that it was up to states to decide if they wanted to permit legal gambling.

Sports betting has also made inroads into America’s college campuses. Some universities, such as Louisiana State University and Michigan State University, have signed multimillion-dollar deals with casinos or gaming companies to promote gambling on campus.

Athletic conferences are also cashing in on the data related to these games and events. For instance, the Mid-American Conference signed a lucrative five-year deal in 2022 to provide real-time statistical event data to gambling companies, which then leverage the data to create real-time wager opportunities during sporting events.

As sports betting comes to colleges and universities, it means the schools will inevitably have to deal with some of the negative aspects of gambling. This potentially includes more than just gambling addiction. It could also involve the potential for student-athletes and coaches to become targets of threats, intimidation or bribes to influence the outcome of events.

The risk for addiction on campus is real. According to the National Council on Problem Gambling, over 2 million adults in the U.S. have a “serious” gambling problem, and another 4 million to 6 million may have mild to moderate problems. One report estimates that 6% of college students have a serious gambling problem.

What Can Be Done?

Two faculty fellows at Miami University’s Institute for Responsible Gaming, Lottery, and Sport – former Ohio State Senator William Coley and Sharon Custer – recommend that regulators and policymakers work with colleges and universities to reduce the potential harm from the growth in legal gaming. Specifically, they recommend that each state regulatory authority:

  • Develop plans to coordinate between different governmental agencies to ensure that individuals found guilty of violations are sanctioned in other jurisdictions.
  • Dedicate some of the revenue from gaming to develop educational materials and support services for athletes and those around them.
  • Create anonymous tip lines to report threats, intimidation or influence, and fund an independent entity to respond to these reports.
  • Assess and protect athlete privacy. For instance, schools might decline to publish contact information for student-athletes and coaches in public directories.
  • Train athletes and those around them on basic privacy management. For instance, schools might advise athletes to not post on public social media outlets, especially if the post gives away their physical location.

The NCAA or athletic conferences could lead the development of resources, policies and sanctions that serve to educate, protect and support student-athletes and the people who work with them at their schools. This will require significant investment to be comprehensive and effective.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Jason W. Osborne, Professor of Statistics, Institute for Responsible Gaming, Lottery, and Sport, Miami University.

Will the Fed Now Exercise Caution?

Image Credit: Adam Selwood (Flickr)

FOMC Now Contending With Banks and Sticky Inflation

The Federal Reserve is facing a rather sticky problem. Despite its best efforts over the past year, inflation is stubbornly refusing to head south with any urgency to a target of 2%.

Rather, the inflation report released on March 14, 2023, shows consumer prices rose 0.4% in February, meaning the year-over-year increase is now at 6% – which is only a little lower than in January.
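To see how those two figures fit together, here is a minimal sketch, using hypothetical index levels rather than the official CPI data, of how the monthly and year-over-year changes are both read off the same price index:

    # Hypothetical CPI index levels -- not the official BLS figures.
    jan_2023 = 299.2
    feb_2023 = 300.4
    feb_2022 = 283.4   # one year earlier

    mom = (feb_2023 / jan_2023 - 1) * 100   # month-over-month change
    yoy = (feb_2023 / feb_2022 - 1) * 100   # year-over-year change

    print(f"MoM: {mom:.1f}%")   # ~0.4%, like the February reading
    print(f"YoY: {yoy:.1f}%")   # ~6.0%, like the annual figure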

So, what do you do if you are a member of the rate-setting Federal Open Market Committee meeting March 21-22 to set the U.S. economy’s interest rates?

The inclination based on the Consumer Price Index data alone may be to go for broke and aggressively raise rates in a bid to tame the inflationary beast. But while the inflation report may be the last major data release before the rate-setting meeting, it is far from being the only information that central bankers will be chewing over.


And economic news from elsewhere – along with jitters from a market already rather spooked by two recent bank failures – may steady the Fed’s hand. In short, monetary policymakers may opt to go with what the market has already seemingly factored in: an increase of 0.25-0.5 percentage point.

Here’s why.

While it is true that inflation is proving remarkably stubborn – and a robust March job report may have put further pressure on the Fed – digging into the latest CPI data shows some signs that inflation is beginning to wane.

Energy prices fell 0.6% in February, after increasing 0.2% the month before. This is a good indication that fuel prices are not out of control despite the twin pressures of extreme weather in the U.S. and the ongoing war in Ukraine. Food prices in February continued to climb, by 0.4% – but here, again, there were glimmers of good news in that meat, fish and egg prices had softened.

Although the latest consumer price report isn’t entirely what the Fed would have wanted to read – it does underline just how difficult the battle against inflation is – there doesn’t appear to be enough in it to warrant an aggressive hike in rates. Certainly it might be seen as risky to move to a benchmark higher than what the market has already factored in. So, I think a quarter point increase is the most likely scenario when Fed rate-setters meet later this month – but certainly no more than a half point hike at most.

This is especially true given that there are signs that the U.S. economy is softening. The latest Bureau of Labor Statistics’ Job Openings and Labor Turnover survey indicates that fewer businesses are looking as aggressively for labor as they once were. In addition, there have been some major rounds of layoffs in the tech sector. Housing has also slowed amid rising mortgage rates and falling prices. And then there was the collapse of Silicon Valley Bank and Signature Bank – caused in part by the Fed’s repeated hikes in its base rate.

This all points to “caution” being the watchword when it comes to the next interest rate decision. The market has priced in a moderate increase in the Fed’s benchmark rate; anything too aggressive has the potential to come as a shock and send stock markets tumbling.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Christopher Decker, Professor of Economics, University of Nebraska Omaha.

How Easy Money Killed Silicon Valley Bank

Image Credit: Federal Reserve

SVB Invested in the Entire Bubble of Everything, Says Renowned Economist

“SVB invested in the entire bubble of everything,” writes Daniel Lacalle, PhD, economist, fund manager, and once ranked among the top twenty most influential economists in the world (2016 and 2017). In his article below, he explains the path Silicon Valley Bank took and the “bets” it lost that led to the bank’s quick demise. “Aaaaand it’s gone,” Lacalle says, borrowing a line from a South Park episode that originally aired in March 2009.

Paul Hoffman, Managing Editor, Channelchek

The second-largest bank collapse in recent history, after Lehman Brothers, could have been prevented. Now the impact is too large, and the contagion risk is difficult to measure.

The demise of the Silicon Valley Bank (SVB) is a classic bank run driven by a liquidity event, but the important lesson for everyone is that the enormity of the unrealized losses and the financial hole in the bank’s accounts would not have existed if not for ultra-loose monetary policy. Let me explain why.

As of December 31, 2022, Silicon Valley Bank had approximately $209.0 billion in total assets and about $175.4 billion in total deposits, according to their public accounts. Their top shareholders are Vanguard Group (11.3 percent), BlackRock (8.1 percent), State Street (5.2 percent) and the Swedish pension fund Alecta (4.5 percent).

The incredible growth and success of SVB could not have happened without negative rates, ultra-loose monetary policy, and the tech bubble that burst in 2022. Furthermore, the bank’s liquidity event could not have happened without the regulatory and monetary policy incentives to accumulate sovereign debt and mortgage-backed securities (MBS).

SVB’s asset base read like the clearest example of the old mantra “Don’t fight the Fed.” SVB made one big mistake: it followed exactly the incentives created by loose monetary policy and regulation.

What happened in 2021? Massive success that, unfortunately, was also the first step to demise. The bank’s deposits nearly doubled with the tech boom. Everyone wanted a piece of the unstoppable new tech paradigm. SVB’s assets also rose and almost doubled.

The bank’s assets rose in value. More than 40 percent were long-dated Treasurys and MBS. The rest were seemingly world-conquering new tech and venture capital investments.

Most of those “low risk” bonds and securities were held to maturity. SVB was following the mainstream rulebook: low-risk assets to balance the risk in venture capital investments. When the Federal Reserve raised interest rates, SVB must have been shocked.

Its entire asset base was a single bet: low rates and quantitative easing for longer. Tech valuations soared in the period of loose monetary policy, and the best way to “hedge” that risk was with Treasurys and MBS. Why bet on anything else? This is what the Fed was buying in billions every month. These were the lowest-risk assets according to all regulations, and, according to the Fed and all mainstream economists, inflation was purely “transitory,” a base-effect anecdote. What could go wrong?

Inflation was not transitory, and easy money was not endless.

Rate hikes happened. And they caught the bank suffering massive losses everywhere. Goodbye, bonds and MBS prices. Goodbye, “new paradigm” tech valuations. And hello, panic. A good old bank run, despite the strong recovery of SVB shares in January. Mark-to-market unrealized losses of $15 billion were almost 100 percent of the bank’s market capitalization. Wipeout.
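For readers who want the mechanics behind those unrealized losses, here is a minimal sketch, with hypothetical numbers rather than SVB’s actual book, of how a long-dated bond bought in a low-rate world gets marked down when yields rise:

    def bond_price(face, coupon_rate, yield_rate, years):
        """Present value of annual coupons plus principal at maturity."""
        coupons = sum(face * coupon_rate / (1 + yield_rate) ** t
                      for t in range(1, years + 1))
        principal = face / (1 + yield_rate) ** years
        return coupons + principal

    # A 10-year bond bought at par when yields were 1.5 percent...
    p0 = bond_price(100, 0.015, 0.015, 10)   # ~100.0
    # ...then marked to market after yields rise to 4 percent.
    p1 = bond_price(100, 0.015, 0.04, 10)    # ~79.7

    print(f"{(p1 / p0 - 1) * 100:.1f}% unrealized loss")   # roughly -20%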

As the bank manager said in the famous South Park episode: “Aaaaand it’s gone.” SVB showed how quickly the capital of a bank can dissolve in front of our eyes.

The Federal Deposit Insurance Corporation (FDIC) will step in, but that is not enough because only 3 percent of SVB deposits were under $250,000. According to Time magazine, more than 85 percent of Silicon Valley Bank’s deposits were not insured.

It gets worse. One-third of US deposits are in small banks, and around half are uninsured, according to Bloomberg. Depositors at SVB will likely lose most of their money, and this will also create significant uncertainty in other entities.

SVB was the poster boy of banking management by the book. They followed a conservative policy of acquiring the safest assets—long-dated Treasurys—as deposits soared.

SVB did exactly what those that blamed the 2008 crisis on “deregulation” recommended. SVB was a boring, conservative bank that invested its rising deposits in sovereign bonds and mortgage-backed securities, believing that inflation was transitory, as everyone except us, the crazy minority, repeated.

SVB did nothing but follow regulation, monetary policy incentives, and Keynesian economists’ recommendations point by point. SVB was the epitome of mainstream economic thinking. And mainstream killed the tech star.

Many will now blame greed, capitalism, and lack of regulation, but guess what? More regulation would have done nothing because regulation and policy incentivize buying these “low risk” assets. Furthermore, regulation and monetary policy are directly responsible for the tech bubble. The increasingly elevated valuations of unprofitable tech and the allegedly unstoppable flow of capital to fund innovation and green investments would never have happened without negative real rates and massive liquidity injections. In the case of SVB, its phenomenal growth in 2021 was a direct consequence of the insane monetary policy implemented in 2020, when the major central banks increased their balance sheet to $20 trillion as if nothing would happen.

SVB is a casualty of the narrative that money printing does not cause inflation and can continue forever. They embraced it wholeheartedly, and now they are gone.

SVB invested in the entire bubble of everything: Sovereign bonds, MBS, and tech. Did they do it because they were stupid or reckless? No. They did it because they perceived that there was very little to no risk in those assets. No bank accumulates risk in an asset it believes is high risk. The only way in which banks accumulate risk is if they perceive that there is none. Why do they perceive no risk? Because the government, regulators, central banks, and the experts tell them there is none. Who will be next?

Many will blame everything except the perverse incentives and bubbles created by monetary policy and regulation, and they will demand rate cuts and quantitative easing to solve the problem. It will only worsen. You do not solve the consequences of a bubble with more bubbles.

The demise of Silicon Valley Bank highlights the enormity of the problem of risk accumulation by political design. SVB did not collapse due to reckless management, but because they did exactly what Keynesians and monetary interventionists wanted them to do. Congratulations.

About the Author:

Daniel Lacalle, PhD, economist and fund manager, is the author of the bestselling books Freedom or Equality (2020) and Escape from the Central Bank Trap (2017), among others.

Lacalle was ranked as one of the top twenty most influential economists in the world in 2016 and 2017 by Richtopia. He holds the CIIA financial analyst title, with a postgraduate degree in higher business studies and a master’s degree in economic investigation.

CBD – What We Know, What We Don’t, and What We Will

Image Credit: Elsa Olofsson (Flickr)

Here’s What Science Now Says about CBD’s Health Benefits

Over the last five years, an often forgotten piece of U.S. federal legislation – the Agriculture Improvement Act of 2018, also known as the 2018 Farm Bill – has ushered in an explosion of interest in the medical potential of cannabis-derived cannabidiol, or CBD.

After decades of debate, the bill made it legal for farmers to grow industrial hemp, a plant rich in CBD. Hemp itself has tremendous value as a cash crop; it’s used to produce biofuel, textiles and animal feed. But the CBD extracted from the hemp plant also has numerous medicinal properties, with the potential to benefit millions through the treatment of seizure disorders, pain or anxiety.

Prior to the bill’s passage, the resistance to legalizing hemp was due to its association with marijuana, its biological cousin. Though hemp and marijuana belong to the same species of plant, Cannabis sativa, they each have a unique chemistry, with very different characteristics and effects. Marijuana possesses tetrahydrocannabinol, or THC, the chemical that produces the characteristic high that is associated with cannabis. Hemp, on the other hand, is a strain of the cannabis plant that contains virtually no THC, and neither it nor the CBD derived from it can produce a high sensation.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Kent E Vrana, Professor and Chair of Pharmacology, Penn State.

As a professor and chair of the department of pharmacology at Penn State, I have been following research developments with CBD closely and have seen some promising evidence for its role in treating a broad range of medical conditions.

While there is growing evidence that CBD can help with certain conditions, caution is needed. Rigorous scientific studies are limited, so it is important that the marketing of CBD products does not get out ahead of the research and of robust evidence.

Unpacking the Hype Behind CBD

The primary concern about CBD marketing is that the scientific community is not sure of the best form of CBD to use. CBD can be produced as either a pure compound or a complex mixture of molecules from hemp that constitute CBD oil. CBD can also be formulated as a topical cream or lotion, or as a gummy, capsule or tincture.

Guidance, backed by clinical research, is needed on the best dose and delivery form of CBD for each medical condition. That research is still in progress.

But in the meantime, the siren’s call of the marketplace has sounded and created an environment in which CBD is often hyped as a cure-all – an elixir for insomnia, anxiety, neuropathic pain, cancer and heart disease.

Sadly, there is precious little rigorous scientific evidence to support many of these claims, and much of the existing research has been performed in animal models.

CBD is simply not a panacea for all that ails you.

Childhood Seizure Disorders

Here’s one thing that is known: Based on rigorous trials with hundreds of patients, CBD has been proven to be a safe and effective drug for seizure disorders, particularly in children.

In 2018, the U.S. Food and Drug Administration granted regulatory approval for the use of a purified CBD product sold under the brand name Epidiolex for the treatment of Lennox-Gastaut and Dravet syndromes in children.

These two rare syndromes, appearing early in life, produce large numbers of frequent seizures that are resistant to traditional epilepsy treatments. CBD delivered as an oral solution as Epidiolex, however, can produce a significant reduction – greater than 25% – in the frequency of seizures in these children, with 5% of the patients becoming seizure-free.

More than 200 Scientific Trials

CBD is what pharmacologists call a promiscuous drug. That means it could be effective for treating a number of medical conditions. In broad strokes, CBD affects more than one process in the body – a term called polypharmacology – and so could benefit more than one medical condition.

As of early 2023, there are 202 ongoing or completed scientific trials examining the effectiveness of CBD in humans on such diverse disorders as chronic pain, substance use disorders, anxiety and arthritis.

In particular, CBD appears to be an anti-inflammatory agent and analgesic, similar to the functions of aspirin. This means it might be helpful for treating people suffering with inflammatory pain, like arthritis, or headaches and body aches.

CBD also holds potential for use in cancer therapy, although it has not been approved by the FDA for this purpose.

The potential for CBD in the context of cancer is twofold:

First, there is evidence that it can directly kill cancer cells, enhancing the ability of traditional therapies to treat the disease. This is not to say that CBD will replace those traditional therapies; the data is not that compelling.

Second, because of its ability to reduce pain and perhaps anxiety, the addition of CBD to a treatment plan may reduce side effects and increase the quality of life for people with cancer.

The Risks of Unregulated CBD

While prescription CBD is safe when used as directed, other forms of the molecule come with risks. This is especially true for CBD oils. The over-the-counter CBD oil industry is unregulated and not necessarily safe, in that there are no regulatory requirements for monitoring what is in a product.

What’s more, rigorous science does not support the unsubstantiated marketing claims made by many CBD products.

In a 2018 commentary, the author describes the results of his own study, which was published in Dutch (in 2017). His team obtained samples of CBD products from patients and analyzed their content. Virtually none of the 21 samples contained the advertised quantity of CBD; indeed, 13 had little to no CBD at all and many contained significant levels of THC, the compound in marijuana that leads to a high – and that was not supposed to have been present.

In fact, studies have shown that there is little control of the contaminants that may be present in over-the-counter products. The FDA has issued scores of warning letters to companies that market unapproved drugs containing CBD. In spite of the marketing of CBD oils as all-natural, plant-derived products, consumers should be aware of the risks of unknown compounds in their products or unintended interactions with their prescription drugs.

Regulatory guidelines for CBD are sorely lacking. Most recently, in January 2023, the FDA concluded that the existing framework is “not appropriate for CBD” and said it would work with Congress to chart a way forward. In a statement, the agency said that “a new regulatory pathway for CBD is needed that balances individuals’ desire for access to CBD products with the regulatory oversight needed to manage risks.”

As a natural product, CBD is still acting as a drug – much like aspirin, acetaminophen or even a cancer chemotherapy. Health care providers simply need to better understand its risks and benefits.

CBD may interact with the body in ways that are unintended. CBD is eliminated from the body by the same liver enzymes that remove a variety of drugs such as blood thinners, antidepressants and organ transplant drugs. Adding CBD oil to your medication list without consulting a physician could be risky and could interfere with prescription medications.

In an effort to help prevent these unwanted interactions, my colleague Dr. Paul Kocis, a clinical pharmacist, and I have created a free online application called the CANNabinoid Drug Interaction Resource. It identifies how CBD could potentially interact with other prescription medications. And we urge all people to disclose over-the-counter CBD use, as well as recreational or medical marijuana use, to their health care providers to prevent undesirable drug interactions.
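Conceptually, an interaction screen of this kind flags two drugs that rely on the same metabolic pathway. The sketch below is purely illustrative: the enzyme assignments are simplified, and it is not how the actual CANNabinoid Drug Interaction Resource is implemented.

    # Illustrative, simplified enzyme assignments -- not clinical guidance.
    METABOLIZED_BY = {
        "CBD":      {"CYP3A4", "CYP2C19"},
        "warfarin": {"CYP2C9", "CYP3A4"},
        "caffeine": {"CYP1A2"},
    }

    def shared_pathways(drug_a, drug_b):
        """Enzymes both drugs compete for; overlap hints at an interaction."""
        return METABOLIZED_BY[drug_a] & METABOLIZED_BY[drug_b]

    print(shared_pathways("CBD", "warfarin"))   # {'CYP3A4'} -> flag for review
    print(shared_pathways("CBD", "caffeine"))   # set() -> no shared enzyme here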

In the end, I believe that CBD will prove to have a place in people’s medicine cabinets – but not until the medical community has established the right form to take and the right dosage for a given medical condition.

The Original Driver of Inflation Has Sailed

Image Credit: Cycling Man (Flickr)

The Supply Chain Part of Inflation Can Be Declared Dead – Now What?

New data shows the supply chain is no longer putting meaningful pressure on inflation – will rising prices finally sail off for good?

Historically, the Global Supply Chain Pressure Index (GSCPI) is now on the low side. In fact, for the monthly period ending February 28, it’s below its 25-year average. What’s more, this is the first time the GSCPI has registered a below-average reading of supply chain pressure since August 2019.

Data Source: NY Federal Reserve
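To make that reading concrete: the New York Fed scales the GSCPI in standard deviations from its historical average, so a negative value means below-average pressure. Below is a minimal sketch, with made-up numbers, of how such a standardized reading is computed from a raw pressure series:

    import statistics

    # Hypothetical raw pressure series (e.g., a blend of shipping costs,
    # delivery times, and backlogs) -- not actual GSCPI inputs.
    history = [0.8, 1.0, 0.9, 1.1, 3.9, 3.2, 1.6, 1.2]
    latest = 0.9

    mean = statistics.mean(history)
    sd = statistics.stdev(history)

    z = (latest - mean) / sd
    print(f"{z:+.2f} standard deviations from average")   # negative = below average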

This is significant because the supply-chain issues related to the pandemic have proven transitory and are no longer the problem. From March 2020 until this most recent report, consumers with easier money available, including stimulus checks, drove demand for goods higher. The suddenness of the onslaught of demand caught the modern world’s “just-in-time” inventory management systems off guard. Making that situation much worse, lockdown policies slowed global production, and shipping and transport became entrenched in gridlock as undermanned loading docks operated under new pandemic processes designed for health and safety.

Inflation climbed as the price of shipping was bid up substantially, and shortages of products on shelves led retailers to temper demand by hiking prices. Some products, particularly new and used cars, experienced sharp price increases as supply chain-related shortages made automotive components such as computer chips and other parts difficult to obtain.

Will Inflation Finally Recede?

An 18-month-long period of rampant inflation in goods, including vehicles, electronics, food, and sporting goods (even bicycles for indoor and outdoor use became unavailable), began to decompress starting in early 2022. The supply chains had slowly worked through the main causes.

Around this same period in 2022, inflation pressures began to build in services. As price hikes for goods lessened or backtracked, the cost of services, including wages, shot up. This is still fueling inflation today.

Often, the fear or expectation of rising prices drives inflation, and vice versa. This may be the reason Fed Chairman Powell used the description “transitory” long past the point at which it was obvious that inflation was likely persistent. If the Chair of the US Central Bank had suggested back then that we had a long-term problem, the worst of it may have arrived faster and been worse. Conversely, now that higher-than-target inflation is here, it makes sense for Powell to speak more hawkishly; this helps alter expectations of ongoing high rates of inflation.

With inflation now primarily coming from services, the medicine is to lessen the demand for human services or, even more difficult, increase the labor force. This is a bitter pill for the economy, and it creates an issue for the Federal Reserve, which has two mandates: keep inflation modest and maximize employment.

Take Away

The GSCPI is an indicator that the goods-based part of the economy has normalized. Inflation is still raging in services, which were barely tied to the supply chain. The hope is that the Fed can reduce the demand for higher and higher wages or, perhaps, bring more capable workers into the workforce. Another part of this plan may have nothing to do with tightening credit conditions: talking publicly about being resolved to squash inflation also shapes expectations, which can reduce the prices charged for services.

The initial battle, the one that kicked off the price hikes (the supply chain), has ended; now we have to see how the rest of the Fed’s fight against inflation, both in policy and in psychology, plays out.

Paul Hoffman

Managing Editor, Channelchek

Sources

https://www.house.mi.gov/hfa/PDF/RevenueForecast/NewYorkFed_Global_Supply_Chain_Pressure_Index_Jan2023.pdf

https://www.newyorkfed.org/research/policy/gscpi#/overview

Big Tech Trying to Act More Like Nimble Smaller Companies

Image Credit: Book Catalog (Flickr)

Why Meta’s Embrace of a ‘Flat’ Management Structure May Not Lead to the Innovation and Efficiency Mark Zuckerberg Seeks

Big Tech, under pressure from dwindling profits and falling stock prices, is seeking some of that old startup magic.

Meta, the parent of Facebook, recently became the latest of the industry’s dominant players to lay off thousands of employees, particularly middle managers, in an effort to return to a flatter, more nimble organization – a structure more typical when a company is very young or very small.

Meta CEO Mark Zuckerberg joins Elon Musk and other business leaders in betting that eliminating layers of management will boost profits. But is flatter better? Will getting rid of managers improve organizational efficiency and the bottom line?

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Amber Stephenson, Associate Professor of Management and Director of Healthcare Management Programs, Clarkson University.

As someone who has studied and taught organization theory as well as leadership and organizational behavior for nearly a decade, I think it’s not that simple.

Resilient Bureaucracies

Since the 1800s, management scholars have sought to understand how organizational structure influences productivity. Most early scholars focused on bureaucratic models that promised managerial authority, rational decision-making and efficiency, impartiality and fairness toward employees.

These centralized bureaucratic structures still reign supreme today. Most of us have likely worked in such organizations, with a boss at the top and clearly defined layers of management below. Rigid, written rules and policies dictate how work is done.

Research shows that some hierarchy correlates with commercial success – even in startups – because adding just one level of management helps prevent directionless exploration of ideas and damaging conflicts among staff. Bureaucracies, in their pure form, are viewed as the most efficient way to organize complex companies; they are reliable and predictable.

While adept at solving routine problems, such as coordinating work and executing plans, hierarchies do less well adapting to rapid changes, such as increased competition, shifting consumer tastes or new government regulations.

Bureaucratic hierarchies can stifle the development of employees and limit entrepreneurial initiative. They are slow and inept at tackling complex problems beyond the routine.

Moreover, they are thought to be very costly. Management scholars Gary Hamel and Michele Zanini estimated in 2016 that waste, rigidity and resistance to change in bureaucratic structures cost the U.S. economy US$3 trillion in lost output a year. That is the equivalent of about 17% of all goods and services produced by the U.S. economy at the time of the study.

Even with the mounting criticisms, bureaucratic structures have shown resilience over time. “The formal managerial hierarchy in modern organizations is as persistent as are calls for its replacement,” Harvard scholars Michael Lee and Amy Edmondson wrote in 2017.

Fascinatingly Flat

Flat structures, on the other hand, aim to decentralize authority by reducing or eliminating hierarchy. The structure is geared toward flexibility and agility rather than efficiency, which is why flat organizations adapt better to dynamic and changing environments.

Flat structures vary. Online retailer Zappos, for example, adopted one of the most extreme versions of the flat structure – known as holacracy – when it eliminated all managers in 2014. Computer game company Valve has a president but no formal managerial structure, leaving employees free to work on projects they choose.

Other companies, such as Gore-Tex maker W. L. Gore & Associates and film-streaming service Netflix, have instituted structures that empower employees with wide-reaching autonomy but still allow for some degree of management.

In general, flat structures rely on constant communication, decentralized decision-making and the self-motivation of employees. As a result, flat structures are associated with innovation, creativity, speed, resilience and improved employee morale.

The promises of going flat are understandably enticing, but flat organizations are tricky to get right.

The list of companies succeeding with flat structures is noticeably short. Besides the companies mentioned above, the list typically includes social media marketing organization Buffer, online publisher Medium and tomato processing and packing company Morning Star Tomatoes.

Other organizations that attempted flatter structures have encountered conflicts between staff, ambiguity around job roles and the emergence of unofficial hierarchies – undermining the whole point of going flat. They eventually reverted to hierarchical structures.

“While people may lament the proliferation of red tape,” management scholars Pedro Monteiro and Paul Adler explain, “in the next breath, many complain that ‘there ought to be a rule.’”

Even Zappos, often cited as the case study for flat organizations, has slowly added back managers in recent years.

Right Tool

In many ways, flat organizations require even stronger management than hierarchical ones.

When managers are removed, the span of control for those remaining increases. Corporate leaders must delegate – and track – tasks across greater numbers of employees and constantly communicate with workers.

Careful planning is needed to determine how work is organized, information shared, conflicts resolved and employees compensated, hired and reviewed. It is not surprising that as companies grow, the complexity of bigger organizations poses barriers to flat models.

In the end, organizational structure is a tool. History shows that business and economic conditions determine which type of structure works for an organization at any given time.

All organizations navigate the trade-off between stability and flexibility. While a hospital system facing extensive regulations and patient safety protocols may require a stable and consistent hierarchy, an online game developer in a competitive environment may need an organizational structure that’s more nimble so it can adapt to changes quickly.

Business and economic conditions are changing for Big Tech, as digital advertising declines, new competitors surface and emerging technologies demand risky investments. Meta’s corporate flattening is one response.

As Zuckerberg noted when explaining recent changes, “Our management theme for 2023 is the ‘Year of Efficiency,’ and we’re focused on becoming a stronger and more nimble organization.”

But context matters. So does planning. All the evidence I’ve seen indicates that embracing flatness by cutting middle management will not, by itself, do much to make a company more efficient.

AI Design Simplifies Complicated Structural Engineering

Image Credit: Autodesk

Integrating Humans with AI in Structural Design

David L. Chandler | MIT News Office

Modern fabrication tools such as 3D printers can make structural materials in shapes that would have been difficult or impossible using conventional tools. Meanwhile, new generative design systems can take great advantage of this flexibility to create innovative designs for parts of a new building, car, or virtually any other device.

But such “black box” automated systems often fall short of producing designs that are fully optimized for their purpose, such as providing the greatest strength in proportion to weight or minimizing the amount of material needed to support a given load. Fully manual design, on the other hand, is time-consuming and labor-intensive.

Now, MIT researchers have found a way to achieve some of the best of both of these approaches. They used an automated design system but stopped the process periodically to allow human engineers to evaluate the work in progress and make tweaks or adjustments before letting the computer resume its design process. Introducing a few of these iterations produced results that performed better than those designed by the automated system alone, and the process was completed more quickly than with the fully manual approach.

The results are reported this week in the journal Structural and Multidisciplinary Optimization, in a paper by MIT doctoral student Dat Ha and assistant professor of civil and environmental engineering Josephine Carstensen.

The basic approach can be applied to a broad range of scales and applications, Carstensen explains, for the design of everything from biomedical devices to nanoscale materials to structural support members of a skyscraper. Already, automated design systems have found many applications. “If we can make things in a better way, if we can make whatever we want, why not make it better?” she asks.

“It’s a way to take advantage of how we can make things in much more complex ways than we could in the past,” says Ha, adding that automated design systems have already begun to be widely used over the last decade in automotive and aerospace industries, where reducing weight while maintaining structural strength is a key need.

“You can take a lot of weight out of components, and in these two industries, everything is driven by weight,” he says. In some cases, such as internal components that aren’t visible, appearance is irrelevant, but for other structures, aesthetics may be important as well. The new system makes it possible to optimize designs for visual as well as mechanical properties, and in such decisions, the human touch is essential.

As a demonstration of their process in action, the researchers designed a number of structural load-bearing beams, such as might be used in a building or a bridge. In their iterations, they saw that a design had an area that could fail prematurely, so they selected that feature and required the program to address it. The computer system then revised the design accordingly, removing the highlighted strut and strengthening other struts to compensate, leading to an improved final design.

The process, which they call Human-Informed Topology Optimization, begins by setting out the needed specifications — for example, a beam needs to be this length, supported on two points at its ends, and must support this much of a load. “As we’re seeing the structure evolve on the computer screen in response to initial specification,” Carstensen says, “we interrupt the design and ask the user to judge it. The user can select, say, ‘I’m not a fan of this region, I’d like you to beef up or beef down this feature size requirement.’ And then the algorithm takes into account the user input.”
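In code, that interrupt-and-resume cycle might look like the toy sketch below. The one-dimensional “design,” the trivial optimizer, and all names are illustrative stand-ins, not the authors’ actual implementation:

    def optimize(min_thickness):
        # Toy optimizer: the lightest profile the current bounds allow.
        return list(min_thickness)

    def run_hito(n_elems, user_feedback, n_rounds=2):
        min_thickness = [1.0] * n_elems           # initial feature-size bounds
        design = optimize(min_thickness)
        for _ in range(n_rounds):
            # Pause: user flags regions to "beef up" as (start, end, new bound).
            for start, end, new_min in user_feedback(design):
                for i in range(start, end):
                    min_thickness[i] = max(min_thickness[i], new_min)
            design = optimize(min_thickness)      # resume the automated pass
        return design

    # Example: the user always asks to thicken elements 3 through 5.
    print(run_hito(8, lambda d: [(3, 6, 2.5)]))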

While the result is not as ideal as what might be produced by a fully rigorous yet significantly slower design algorithm that considers the underlying physics, she says it can be much better than a result generated by a rapid automated design system alone. “You don’t get something that’s quite as good, but that was not necessarily the goal. What we can show is that instead of using several hours to get something, we can use 10 minutes and get something much better than where we started off.”

The system can be used to optimize a design based on any desired properties, not just strength and weight. For example, it can be used to minimize fracture or buckling, or to reduce stresses in the material by softening corners.

Carstensen says, “We’re not looking to replace the seven-hour solution. If you have all the time and all the resources in the world, obviously you can run these and it’s going to give you the best solution.” But for many situations, such as designing replacement parts for equipment in a war zone or a disaster-relief area with limited computational power available, “then this kind of solution that catered directly to your needs would prevail.”

Similarly, for smaller companies manufacturing equipment in essentially “mom and pop” businesses, such a simplified system might be just the ticket. The new system they developed is not only simple and efficient to run on smaller computers, but it also requires far less training to produce useful results, Carstensen says. A basic two-dimensional version of the software, suitable for designing basic beams and structural parts, is freely available now online, she says, as the team continues to develop a full 3D version.

“The potential applications of Prof Carstensen’s research and tools are quite extraordinary,” says Christian Málaga-Chuquitaype, a professor of civil and environmental engineering at Imperial College London, who was not associated with this work. “With this work, her group is paving the way toward a truly synergistic human-machine design interaction.”

“By integrating engineering ‘intuition’ (or engineering ‘judgement’) into a rigorous yet computationally efficient topology optimization process, the human engineer is offered the possibility of guiding the creation of optimal structural configurations in a way that was not available to us before,” he adds. “Her findings have the potential to change the way engineers tackle ‘day-to-day’ design tasks.”

Reprinted with permission from MIT News ( http://news.mit.edu/ )

“Self-Boosting” Vaccines for a Myriad of Applications

Image: Second Bay Studios

Microparticles Could be Used to Deliver “Self-Boosting” Vaccines

Anne Trafton | MIT News Office  

Most vaccines, from measles to Covid-19, require a series of multiple shots before the recipient is considered fully vaccinated. To make that easier to achieve, MIT researchers have developed microparticles that can be tuned to deliver their payload at different time points, which could be used to create “self-boosting” vaccines.

In a new study, the researchers describe how these particles degrade over time, and how they can be tuned to release their contents at different time points. The study also offers insights into how the contents can be protected from losing their stability as they wait to be released.

Using these particles, which resemble tiny coffee cups sealed with a lid, researchers could design vaccines that would need to be given just once, and would then “self-boost” at a specified point in the future. The particles can remain under the skin until the vaccine is released and then break down, just like resorbable sutures.

This type of vaccine delivery could be particularly useful for administering childhood vaccinations in regions where people don’t have frequent access to medical care, the researchers say.

“This is a platform that can be broadly applicable to all types of vaccines, including recombinant protein-based vaccines, DNA-based vaccines, even RNA-based vaccines,” says Ana Jaklenec, a research scientist at MIT’s Koch Institute for Integrative Cancer Research. “Understanding the process of how the vaccines are released, which is what we described in this paper, has allowed us to work on formulations that address some of the instability that could be induced over time.”

This approach could also be used to deliver a range of other therapeutics, including cancer drugs, hormone therapy, and biologic drugs, the researchers say.

Jaklenec and Robert Langer, the David H. Koch Institute Professor at MIT and a member of the Koch Institute, are the senior authors of the new study, which appears today in Science Advances. Morteza Sarmadi, a research specialist at the Koch Institute and recent MIT PhD recipient, is the lead author of the paper.

Staggered Drug Release

The researchers first described their new microfabrication technique for making these hollow microparticles in a 2017 Science paper. The particles are made from PLGA, a biocompatible polymer that has already been approved for use in medical devices such as implants, sutures, and prosthetic devices.

To create cup-shaped particles, the researchers create arrays of silicon molds that are used to shape the PLGA cups and lids. Once the array of polymer cups has been formed, the researchers employ a custom-built, automated dispensing system to fill each cup with a drug or vaccine. After the cups are filled, the lids are aligned and lowered onto each cup, and the system is heated slightly until the cup and lid fuse together, sealing the drug inside.

This technique, called SEAL (StampEd Assembly of polymer Layers), can be used to produce particles of any shape or size. In a paper recently published in the journal Small Methods, lead author Ilin Sadeghi, an MIT postdoc, and others created a new version of the technique that allows for simplified and larger-scale manufacturing of the particles.

In the new Science Advances study, the researchers wanted to learn more about how the particles degrade over time, what causes the particles to release their contents, and whether it might be possible to enhance the stability of the drugs or vaccines carried within the particles.

“We wanted to understand mechanistically what’s happening, and how that information can be used to help stabilize drugs and vaccines and optimize their kinetics,” Jaklenec says.

Their studies of the release mechanism revealed that the PLGA polymers that make up the particles are gradually cleaved by water, and when enough of these polymers have broken down, the lid becomes very porous. Very soon after these pores appear, the lid breaks apart, spilling out the contents.

“We realized that sudden pore formation prior to the release time point is the key that leads to this pulsatile release,” Sarmadi says. “We see no pores for a long period of time, and then all of a sudden we see a significant increase in the porosity of the system.”

The researchers then set out to analyze how a variety of design parameters, including the size and shape of the particles and the composition of the polymers used to make them, affect the timing of drug release.

To their surprise, the researchers found that particle size and shape had little effect on drug release kinetics. This sets the particles apart from most other types of drug delivery particles, whose size plays a significant role in the timing of drug release. Instead, the PLGA particles release their payload at different times based on differences in the composition of the polymer and the chemical groups attached to the ends of the polymers.

“If you want the particle to release after six months for a certain application, we use the corresponding polymer, or if we want it to release after two days, we use another polymer,” Sarmadi says. “A broad range of applications can benefit from this observation.”
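In software terms, that selection step is a simple lookup. The formulations and release times below are illustrative placeholders, not the values measured in the study:

    # (lactide:glycolide ratio, end group) -> approximate release time, days.
    # Illustrative values only; real timings are characterized experimentally.
    FORMULATIONS = {
        ("50:50", "acid"):  10,
        ("50:50", "ester"): 17,
        ("75:25", "ester"): 60,
        ("85:15", "ester"): 180,
    }

    def pick_polymer(target_days):
        """Return the formulation whose release time is closest to target."""
        return min(FORMULATIONS.items(), key=lambda kv: abs(kv[1] - target_days))

    print(pick_polymer(14))    # -> (('50:50', 'ester'), 17)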

Stabilizing the Payload

The researchers also investigated how changes in environmental pH affect the particles. When water breaks down the PLGA polymers, the byproducts include lactic acid and glycolic acid, which make the overall environment more acidic. This can damage the drugs carried within the particles, which are usually proteins or nucleic acids that are sensitive to pH.

In an ongoing study, the researchers are now working on ways to counteract this increase in acidity, which they hope will improve the stability of the payload carried within the particles.

To help with future particle design, the researchers also developed a computational model that can take many different design parameters into account and predict how a particular particle will degrade in the body. This type of model could be used to guide the development of the type of PLGA particles that the researchers focused on in this study, or other types of microfabricated or 3D-printed particles or medical devices.

The research team has already used this strategy to design a self-boosting polio vaccine, which is now being tested in animals. Usually, the polio vaccine has to be given as a series of two to four separate injections.

“We believe these core shell particles have the potential to create a safe, single-injection, self-boosting vaccine in which a cocktail of particles with different release times can be created by changing the composition. Such a single injection approach has the potential to not only improve patient compliance but also increase cellular and humoral immune responses to the vaccine,” Langer says.

This type of drug delivery could also be useful for treating diseases such as cancer. In a 2020 Science Translational Medicine study, the researchers showed that they could deliver drugs that stimulate the STING pathway, which promotes immune responses in the environment surrounding a tumor, in several mouse models of cancer. After being injected into tumors, the particles delivered several doses of the drug over several months, which inhibited tumor growth and reduced metastasis in the treated animals.

Reprinted with permission from MIT News ( http://news.mit.edu/ )

3D-Printed Human Hearts (Patient Specific)

Image Credit: Melanie Gonick, MIT

Custom, 3D-Printed Heart Replicas Look and Pump Just Like the Real Thing

Jennifer Chu | MIT News Office

No two hearts beat alike. The size and shape of the heart can vary from one person to the next. These differences can be particularly pronounced for people living with heart disease, as their hearts and major vessels work harder to overcome any compromised function.

MIT engineers are hoping to help doctors tailor treatments to patients’ specific heart form and function, with a custom robotic heart. The team has developed a procedure to 3D print a soft and flexible replica of a patient’s heart. They can then control the replica’s action to mimic that patient’s blood-pumping ability.

The procedure involves first converting medical images of a patient’s heart into a three-dimensional computer model, which the researchers can then 3D print using a polymer-based ink. The result is a soft, flexible shell in the exact shape of the patient’s own heart. The team can also use this approach to print a patient’s aorta — the major artery that carries blood out of the heart to the rest of the body.
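The article does not describe the team’s software pipeline, but a generic scan-to-print workflow often looks like the following sketch: take a segmented scan volume, extract a surface with marching cubes, and export an STL mesh for the printer. This is a minimal illustration, assuming the scikit-image and numpy-stl packages; the threshold and the synthetic stand-in volume are illustrative only.

```python
import numpy as np
from skimage import measure
from stl import mesh  # provided by the numpy-stl package

def volume_to_stl(volume, threshold, out_path="heart_shell.stl"):
    """Extract an isosurface from a 3D scan volume and save it as an
    STL mesh suitable for slicing and 3D printing."""
    # Marching cubes turns the voxel volume into a triangle surface mesh.
    verts, faces, _normals, _values = measure.marching_cubes(volume, level=threshold)

    # Pack the triangles into numpy-stl's mesh structure.
    shell = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
    for i, face in enumerate(faces):
        shell.vectors[i] = verts[face]
    shell.save(out_path)

# Synthetic data standing in for a segmented cardiac scan:
fake_scan = np.random.rand(64, 64, 64)
volume_to_stl(fake_scan, threshold=0.5)
```

A real pipeline would first segment the ventricle and aorta from the patient’s CT or MRI images, as the team did, and would typically smooth and repair the mesh before printing.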

To mimic the heart’s pumping action, the team has fabricated sleeves similar to blood pressure cuffs that wrap around a printed heart and aorta. The underside of each sleeve resembles precisely patterned bubble wrap. When the sleeve is connected to a pneumatic system, researchers can tune the outflowing air to rhythmically inflate the sleeve’s bubbles and contract the heart, mimicking its pumping action.
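The control hardware is not detailed in the article, but the timing logic behind such rhythmic inflation can be sketched in a few lines. Everything here is hypothetical, including the `PneumaticValve` stand-in; a real rig would drive an actual valve and pressure regulator rather than print messages.

```python
import time

class PneumaticValve:
    """Hypothetical stand-in for a real pneumatic valve driver."""
    def open(self):  print("inflate sleeve -> contract ventricle")
    def close(self): print("vent sleeve    -> ventricle relaxes and refills")

def pump(valve, bpm=60, systole_fraction=0.35, beats=5):
    """Rhythmically inflate and deflate the sleeve.

    Each cardiac cycle lasts 60/bpm seconds; the sleeve stays inflated
    for roughly the systolic fraction of the cycle and vents for the rest.
    """
    period = 60.0 / bpm
    for _ in range(beats):
        valve.open()
        time.sleep(period * systole_fraction)
        valve.close()
        time.sleep(period * (1.0 - systole_fraction))

pump(PneumaticValve())
```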

The researchers can also inflate a separate sleeve surrounding a printed aorta to constrict the vessel. This constriction, they say, can be tuned to mimic aortic stenosis — a condition in which the aortic valve narrows, causing the heart to work harder to force blood through the body.
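For a rough sense of why the degree of constriction matters: clinicians commonly grade aortic stenosis with the simplified Bernoulli relation, in which the peak pressure gradient in mmHg is about four times the square of the jet velocity in m/s. The snippet below applies that standard clinical formula; it is offered as background, not as the team’s tuning method.

```python
def pressure_gradient_mmhg(jet_velocity_m_per_s):
    """Simplified Bernoulli equation used in echocardiography:
    peak gradient (mmHg) ~= 4 * v^2, with jet velocity v in m/s."""
    return 4.0 * jet_velocity_m_per_s ** 2

# A tighter constriction forces a faster jet and a steeper gradient:
for v in (1.0, 2.5, 4.0):  # m/s; a peak velocity >= 4 m/s typically indicates severe stenosis
    print(f"jet {v} m/s -> ~{pressure_gradient_mmhg(v):.0f} mmHg")
```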

Doctors commonly treat aortic stenosis by surgically implanting a synthetic valve designed to widen the aorta’s natural valve. In the future, the team says that doctors could potentially use their new procedure to first print a patient’s heart and aorta, then implant a variety of valves into the printed model to see which design results in the best function and fit for that particular patient. The heart replicas could also be used by research labs and the medical device industry as realistic platforms for testing therapies for various types of heart disease.

“All hearts are different,” says Luca Rosalia, a graduate student in the MIT-Harvard Program in Health Sciences and Technology. “There are massive variations, especially when patients are sick. The advantage of our system is that we can recreate not just the form of a patient’s heart, but also its function in both physiology and disease.”

Rosalia and his colleagues report their results in a study appearing today in Science Robotics. MIT co-authors include Caglar Ozturk, Debkalpa Goswami, Jean Bonnemain, Sophie Wang, and Ellen Roche, along with Benjamin Bonner of Massachusetts General Hospital, James Weaver of Harvard University, and Christopher Nguyen, Rishi Puri, and Samir Kapadia at the Cleveland Clinic in Ohio.

Print and Pump

In January 2020, team members, led by mechanical engineering professor Ellen Roche, developed a “biorobotic hybrid heart” — a general replica of a heart, made from synthetic muscle containing small, inflatable cylinders, which they could control to mimic the contractions of a real beating heart.

Shortly after those efforts, the Covid-19 pandemic forced Roche’s lab, along with most others on campus, to temporarily close. Undeterred, Rosalia continued tweaking the heart-pumping design at home.

“I recreated the whole system in my dorm room that March,” Rosalia recalls.

Months later, the lab reopened, and the team continued where it left off, working to improve the control of the heart-pumping sleeve, which they tested in animal and computational models. They then expanded their approach to develop sleeves and heart replicas that are specific to individual patients. For this, they turned to 3D printing.

“There is a lot of interest in the medical field in using 3D printing technology to accurately recreate patient anatomy for use in preprocedural planning and training,” notes Wang, who is a vascular surgery resident at Beth Israel Deaconess Medical Center in Boston.

An Inclusive Design

In the new study, the team took advantage of 3D printing to produce custom replicas of actual patients’ hearts. They used a polymer-based ink that, once printed and cured, can squeeze and stretch, much like a real beating heart.

As their source material, the researchers used medical scans of 15 patients diagnosed with aortic stenosis. The team converted each patient’s images into a three-dimensional computer model of the patient’s left ventricle (the main pumping chamber of the heart) and aorta. They fed this model into a 3D printer to generate a soft, anatomically accurate shell of both the ventricle and vessel.

The team also fabricated sleeves to wrap around the printed forms. They tailored each sleeve’s pockets such that, when wrapped around their respective forms and connected to a small air pumping system, the sleeves could be tuned separately to realistically contract and constrict the printed models.

The researchers showed that for each model heart, they could accurately recreate the same heart-pumping pressures and flows that were previously measured in each respective patient.

“Being able to match the patients’ flows and pressures was very encouraging,” Roche says. “We’re not only printing the heart’s anatomy, but also replicating its mechanics and physiology. That’s the part that we get excited about.”

Going a step further, the team aimed to replicate some of the interventions that a handful of the patients underwent, to see whether the printed heart and vessel responded in the same way. Some patients had received valve implants designed to widen the aorta. Roche and her colleagues implanted similar valves in the printed aortas modeled after each patient. When they activated the printed heart to pump, they observed that the implanted valves improved flow in much the same way the surgical implants had in the actual patients.

Finally, the team used an actuated printed heart to compare implants of different sizes, to see which would result in the best fit and flow — something they envision clinicians could potentially do for their patients in the future.

“Patients would get their imaging done, which they do anyway, and we would use that to make this system, ideally within the day,” says co-author Nguyen. “Once it’s up and running, clinicians could test different valve types and sizes and see which works best, then use that to implant.”

Ultimately, Roche says the patient-specific replicas could help develop and identify ideal treatments for individuals with unique and challenging cardiac geometries.

“Designing inclusively for a large range of anatomies, and testing interventions across this range, may increase the addressable target population for minimally invasive procedures,” Roche says.

This research was supported, in part, by the National Science Foundation, the National Institutes of Health, and the National Heart, Lung, and Blood Institute.

Reprinted with permission from MIT News ( http://news.mit.edu/ )

Defects in the Endocannabinoid System and Disease Development

Image Credit: 1 Life Photography

People Produce Endocannabinoids – Similar to Compounds Found in Marijuana – Critical to Many Bodily Functions

Over the past two decades, a great deal of attention has been given to marijuana – also known as pot or weed. As of early 2023, marijuana has been legalized for recreational use in 21 states and Washington, D.C., and the use of marijuana for medical purposes has grown significantly during the last 20 or so years.

But few people know that the human body naturally produces chemicals that are very similar to delta-9-tetrahydrocannabinol, or THC, the psychoactive compound in marijuana, which comes from the Cannabis sativa plant. These substances are called endocannabinoids, and they’re found across all vertebrate species.

Evolutionarily, the appearance of endocannabinoids in vertebrate animals predates that of Cannabis sativa by about 575 million years.

It is as if the human body has its own version of a marijuana seedling inside, constantly producing small amounts of endocannabinoids.

The similarity of endocannabinoids to THC, and their importance in maintaining human health, have raised significant interest among scientists to further study their role in health and disease, and potentially use them as therapeutic targets to treat human diseases.

THC was first identified in 1964, and is just one of more than 100 compounds found in marijuana that are called cannabinoids.

Endocannabinoids were not discovered until 1992. Since then, research has revealed that they are critical for many important physiological functions that regulate human health. An imbalance in the production of endocannabinoids, or in the body’s responsiveness to them, can lead to major clinical disorders, including obesity as well as neurodegenerative, cardiovascular and inflammatory diseases.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Prakash Nagarkatti, Professor of Pathology, Microbiology and Immunology, University of South Carolina, and Mitzi Nagarkatti, Professor of Pathology, Microbiology and Immunology, University of South Carolina.

We are immunologists who have been studying the effects of marijuana cannabinoids and vertebrate endocannabinoids on inflammation and cancer for more than two decades. Research in our laboratory has shown that endocannabinoids regulate inflammation and other immune functions.

What is the Endocannabinoid System?

A variety of tissues in the body, including brain, muscle, fatty tissue and immune cells, produce small quantities of endocannabinoids. There are two main types of endocannabinoids: anandamide, or AEA, and 2-arachidonoyl glycerol, known as 2-AG. Both of them can activate the body’s cannabinoid receptors, which receive and process chemical signals in cells.

One of these receptors, called CB1, is found predominantly in the brain. The other, called CB2, is found mainly in immune cells. It is primarily through the activation of these two receptors that endocannabinoids control many bodily functions.

The receptors can be compared to a “lock” and the endocannabinoids a “key” that can open the lock and gain entry into the cells. All these endocannabinoid receptors and molecules together are referred to as the endocannabinoid system.

The cannabis plant contains another compound called cannabidiol, or CBD, which has become popular for its medicinal properties. Unlike THC, CBD doesn’t have psychoactive properties because it does not activate CB1 receptors in the brain. Nor does it activate the CB2 receptors, meaning that its action on immune cells is independent of CB2 receptors.

Endocannabinoid receptors are found throughout most of the human body

Role of Endocannabinoids in the Body

The euphoric “high” feeling that people experience when using marijuana comes from THC activating the CB1 receptors in the brain.

But when endocannabinoids activate CB1 receptors, by comparison, they do not cause a marijuana high. One reason is that the body produces them in smaller quantities than the typical amount of THC in marijuana. The other is that certain enzymes break them down rapidly after they carry out their cellular functions.

However, there is growing evidence that certain activities may release mood-elevating endocannabinoids. Some research suggests that the relaxed, euphoric feeling you get after exercise, called a “runner’s high,” results from the release of endocannabinoids rather than from endorphins, as previously thought.

The endocannabinoids regulate several bodily functions such as sleep, mood, appetite, learning, memory, body temperature, pain, immune functions and fertility. They control some of these functions by regulating nerve cell signaling in the brain. Normally, nerve cells communicate with one another at junctions called synapses. The endocannabinoid system in the brain regulates this communication at synapses, which explains its ability to affect a wide array of bodily functions.

The Elixir of Endocannabinoids

Research in our laboratory has shown that certain cells of the immune system produce endocannabinoids that can regulate inflammation and other immune functions through the activation of CB2 receptors.

In addition, we have shown that endocannabinoids are highly effective in lessening the debilitating effects of autoimmune diseases. These are diseases in which the immune system goes haywire and starts destroying the body’s organs and tissues. Examples include multiple sclerosis, lupus, hepatitis and arthritis.

Recent research suggests that migraine, fibromyalgia, irritable bowel syndrome, post-traumatic stress disorder and bipolar disorder are all linked to low levels of endocannabinoids.

In a 2022 study, researchers found that a defect in a gene that helps produce endocannabinoids causes early onset of Parkinson’s disease. Another 2022 study linked the same gene defect to other neurological disorders, including developmental delay, poor muscle control and vision problems.

Other research has shown that people with a defective form of the CB1 receptor experience increased pain sensitivity, such as migraine headaches, and suffer from sleep and memory disorders as well as anxiety.

The endocannabinoid system – consisting of the endocannabinoids and the cannabinoid receptors – regulates nerve cell communication at the synapse, thereby playing a role in a variety of bodily functions. 

The Likeness Between Marijuana and Endocannabinoids

We believe that the medicinal properties of THC may be linked to the molecule’s ability to compensate for a deficiency or defect in the production or functions of the endocannabinoids.

For example, scientists have found that people who experience certain types of chronic pain may have decreased production of endocannabinoids. People who consume marijuana for medicinal purposes report significant relief from pain. Because the THC in marijuana is the cannabinoid that reduces pain, it may be helping to compensate for the decreased production or functions of endocannabinoids in such patients.

Deciphering the role of endocannabinoids is still an emerging area of health research, and much more work is needed to understand how they regulate different functions in the body.

In our view, it will also be important to continue to unravel the relationship between defects in the endocannabinoid system and the development of various diseases and clinical disorders. We think that the answers could hold great promise for the development of new therapies using the body’s own cannabinoids.

AI and the U.S. Military’s Unmanned Technological Edge

Image: Marine Corps Warfighting Laboratory, MAGTF Integrated Experiment (MCWL)

War in Ukraine Accelerates Global Drive Toward Killer Robots

The U.S. military is intensifying its commitment to the development and use of autonomous weapons, as confirmed by an update to a Department of Defense directive. The update, released Jan. 25, 2023, is the first in a decade to focus on autonomous weapons that use artificial intelligence. It follows a related implementation plan released by NATO on Oct. 13, 2022, that is aimed at preserving the alliance’s “technological edge” in what are sometimes called “killer robots.”

Both announcements reflect a crucial lesson militaries around the world have learned from recent combat operations in Ukraine and Nagorno-Karabakh: Weaponized artificial intelligence is the future of warfare.

“We know that commanders are seeing a military value in loitering munitions in Ukraine,” Richard Moyes, director of Article 36, a humanitarian organization focused on reducing harm from weapons, told me in an interview. These weapons, which are a cross between a bomb and a drone, can hover for extended periods while waiting for a target. For now, such semi-autonomous missiles are generally being operated with significant human control over key decisions, he said.

Pressure of War

But as casualties mount in Ukraine, so does the pressure to achieve decisive battlefield advantages with fully autonomous weapons – robots that can choose, hunt down and attack their targets all on their own, without needing any human supervision.

This month, a key Russian manufacturer announced plans to develop a new combat version of its Marker reconnaissance robot, an uncrewed ground vehicle, to augment existing forces in Ukraine. Fully autonomous drones are already being used to defend Ukrainian energy facilities from other drones. Wahid Nawabi, CEO of the U.S. defense contractor that manufactures the semi-autonomous Switchblade drone, said the technology is already within reach to convert these weapons to become fully autonomous.

Mykhailo Fedorov, Ukraine’s digital transformation minister, has argued that fully autonomous weapons are the war’s “logical and inevitable next step” and recently said that soldiers might see them on the battlefield in the next six months.

Proponents of fully autonomous weapons systems argue that the technology will keep soldiers out of harm’s way by keeping them off the battlefield. Such systems could also make military decisions at superhuman speed, radically improving defensive capabilities.

Currently, semi-autonomous weapons, like loitering munitions that track and detonate themselves on targets, require a “human in the loop.” They can recommend actions but require their operators to initiate them.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of James Dawes, Professor, Macalester College.

By contrast, fully autonomous drones, like the so-called “drone hunters” now deployed in Ukraine, can track and disable incoming unmanned aerial vehicles day and night, with no need for operator intervention and faster than human-controlled weapons systems.

Calling for a Timeout

Critics like The Campaign to Stop Killer Robots have been advocating for more than a decade for a ban on the research and development of autonomous weapons systems. They point to a future where autonomous weapons systems are designed specifically to target humans, not just vehicles, infrastructure and other weapons. Wartime decisions over life and death, they argue, must remain in human hands; turning them over to an algorithm amounts to the ultimate form of digital dehumanization.

Together with Human Rights Watch, The Campaign to Stop Killer Robots argues that autonomous weapons systems lack the human judgment necessary to distinguish between civilians and legitimate military targets. They also lower the threshold to war by reducing the perceived risks, and they erode meaningful human control over what happens on the battlefield.

This composite image shows a ‘Switchblade’ loitering munition drone launching from a tube and extending its folded wings. U.S. Army AMRDEC Public Affairs

The organizations argue that the militaries investing most heavily in autonomous weapons systems, including the U.S., Russia, China, South Korea and the European Union, are launching the world into a costly and destabilizing new arms race. One consequence could be this dangerous new technology falling into the hands of terrorists and others outside of government control.

The updated Department of Defense directive tries to address some of the key concerns. It declares that the U.S. will use autonomous weapons systems with “appropriate levels of human judgment over the use of force.” Human Rights Watch issued a statement saying that the new directive fails to make clear what the phrase “appropriate level” means and doesn’t establish guidelines for who should determine it.

But as Gregory Allen, an expert from the national defense and international relations think tank Center for Strategic and International Studies, argues, this language establishes a lower threshold than the “meaningful human control” demanded by critics. The Defense Department’s wording, he points out, allows for the possibility that in certain cases, such as with surveillance aircraft, the level of human control considered appropriate “may be little to none.”

The updated directive also includes language promising ethical use of autonomous weapons systems, specifically by establishing a system of oversight for developing and employing the technology, and by insisting that the weapons will be used in accordance with existing international laws of war. But Article 36’s Moyes noted that international law currently does not provide an adequate framework for understanding, much less regulating, the concept of weapon autonomy.

The current legal framework does not make it clear, for instance, that commanders are responsible for understanding what will trigger the systems that they use, or that they must limit the area and time over which those systems will operate. “The danger is that there is not a bright line between where we are now and where we have accepted the unacceptable,” said Moyes.

Impossible Balance?

The Pentagon’s update demonstrates a simultaneous commitment to deploying autonomous weapons systems and to complying with international humanitarian law. How the U.S. will balance these commitments, and whether such a balance is even possible, remains to be seen.

The International Committee of the Red Cross, the custodian of international humanitarian law, insists that the legal obligations of commanders and operators “cannot be transferred to a machine, algorithm or weapon system.” Right now, human beings are held responsible for protecting civilians and limiting combat damage by making sure the use of force is proportional to military objectives.

If and when artificially intelligent weapons are deployed on the battlefield, who should be held responsible when needless civilian deaths occur? There isn’t a clear answer to that very important question.

Preparing Students for the New Nuclear

Image: Student Santiago Andrade interning at Caterpillar

Not Your Grandfather’s Nuclear Reactor – Educating the Needed Nuclear Talent Pool

Human infrastructure is critical to any industry. Building the foundations so a new or quickly expanding technology can begin to flourish requires foresight. This forward planning requires schools to recognize a need and gauge student interest, and it requires industries both old and new to understand that internships are two-way streets, benefiting newcomers and established players alike. MIT created a unique program for students in the field of nuclear power generation. Below is an article republished from their website on the success of one of their programs. – Paul Hoffman, Channelchek

Kara Baskin | MIT News

As nuclear power has gained greater recognition as a zero-emission energy source, the MIT Leaders for Global Operations (LGO) program has taken notice. Two years ago, LGO began a collaboration with MIT’s Department of Nuclear Science and Engineering (NSE) as a way to showcase the vital contribution of both business savvy and scientific rigor that LGO’s dual-degree graduates can offer this growing field.

“We saw that the future of fission and fusion required business acumen and management acumen,” says Professor Anne White, NSE department head. “People who are going to be leaders in our discipline, and leaders in the nuclear enterprise, are going to need all of the technical pieces of the puzzle that our engineering department can provide in terms of education and training. But they’re also going to need a much broader perspective on how the technology connects with society through the lens of business.”

The response has been positive: “Companies are seeing the value of nuclear technology for their operations,” White says, and this often happens in unexpected ways.

For example, graduate student Santiago Andrade recently completed a research project at Caterpillar Inc., a preeminent manufacturer of mining and construction equipment. Caterpillar is one of more than 20 major companies that partner with the LGO program, offering six-month internships to each student. On the surface, it seemed like an improbable pairing: what could Andrade, who was pursuing his master’s in nuclear science and engineering, do for a manufacturing company? But Caterpillar wanted to understand the technical and commercial feasibility of using nuclear energy to power mining sites and data centers when wind and solar weren’t viable.

“They are leaving no stone unturned in the search of financially smart solutions that can support the transition to a clean energy dependency,” Andrade says. “My project, along with many others’, is part of this effort.”

“The research done through the LGO program with Santiago is enabling Caterpillar to understand how alternative technologies, like the nuclear microreactor, could participate in these markets in the future,” says Brian George, product manager for large electric power solutions at Caterpillar. “Our ability to connect our customers with the research will provide for a more accurate understanding of the potential opportunity, and helps provide exposure for our customers to emerging technologies.”

With looming threats of climate change, White says, “We’re going to require more opportunities for nuclear technologies to step in and be part of those solutions. A cohort of LGO graduates will come through this program with technical expertise — a master’s degree in nuclear engineering — and an MBA. There’s going to be a tremendous talent pool out there to help companies and governments.”

Andrade, who completed an undergraduate degree in chemical engineering and had a strong background in thermodynamics, applied to LGO unsure of which track to choose, but he knew he wanted to confront the world’s energy challenge. When MIT Admissions suggested that he join LGO’s new nuclear track, he was intrigued by how it could further his career.

“Since the NSE department offers opportunities ranging from energy to health care and from quantum engineering to regulatory policy, the possibilities of career tracks after graduation are countless,” he says.

He was also inspired by the fact that, as he says, “Nuclear is one of the less-popular solutions in terms of our energy transition journey. One of the things that attracted me is that it’s not one of the most popular, but it’s one of the most useful.”

In addition to his work at Caterpillar, Andrade connected deeply with professors. He worked closely with professors Jacopo Buongiorno and John Parsons as a research assistant, helping them develop a business model to successfully support the deployment of nuclear microreactors. After graduation, he plans to work in the clean energy sector with an eye to innovations in the nuclear energy technology space.

His LGO classmate, Lindsey Kennington, a control systems engineer, echoes his sentiments: This is a revolutionary time for nuclear technology.

“Before MIT, I worked on a lot of nuclear waste or nuclear weapons-related projects. All of them were fission-related. I got disillusioned because of all the bureaucracy and the regulation,” Kennington says. “However, now there are a lot of new nuclear technologies coming straight out of MIT. Commonwealth Fusion Systems, a fusion startup, represents a prime example of MIT’s close relationship to new nuclear tech. Small modular reactors are another emerging technology being developed by MIT. Exposure to these cutting-edge technologies was the main sell factor for me.”

Kennington conducted an internship with National Grid, where she used her expertise to evaluate how existing nuclear power plants could generate hydrogen. At MIT, she studied nuclear and energy policy, which offered her additional perspective that traditional engineering classes might not have provided. Because nuclear power has long been a hot-button issue, Kennington was able to gain nuanced insight about the pathways and roadblocks to its implementation.

“I don’t think that other engineering departments emphasize that focus on policy quite as much. [Those classes] have been one of the most enriching parts of being in the nuclear department,” she says.

Most of all, she says, it’s a pivotal time to be part of a new, blossoming program at the forefront of clean energy, especially as fusion research grows more prevalent.

“We’re at an inflection point,” she says. “Whether or not we figure out fusion in the next five, 10, or 20 years, people are going to be working on it — and it’s a really exciting time to not only work on the science but to actually help the funding and business side grow.”

White puts it simply.

“This is not your parents’ nuclear,” she says. “It’s something totally different. Our discipline is evolving so rapidly that people who have technical expertise in nuclear will have a huge advantage in this next generation.”

Reprinted with permission from MIT News ( http://news.mit.edu/ )