Microglia, the “Janitors” of the Brain, Show Promise in Treating Neurodegenerative Disorders

Image Credit: NIH (Flickr)

Harnessing the Brain’s Immune Cells to Stave off Alzheimer’s and Other Neurodegenerative Diseases

Many neurodegenerative diseases, or conditions that result from the loss of function or death of brain cells, remain largely untreatable. Most available treatments target just one of the multiple processes that can lead to neurodegeneration, an approach that may do little, if anything, to address disease symptoms or progression.

But what if researchers harnessed the brain’s inherent capabilities to cleanse and heal itself? My colleagues and I in the Lukens Lab at the University of Virginia believe that the brain’s own immune system may hold the key to neurodegenerative disease treatment. In our research, we found a protein that could possibly be leveraged to help the brain’s immune cells, or microglia, stave off Alzheimer’s disease.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Kristine Zengeler, Ph.D. Candidate in Neuroscience, University of Virginia.

Challenges in Treating Neurodegeneration

No available treatments for neurodegenerative diseases stop ongoing neurodegeneration while also helping affected areas in the body heal and recuperate.

In terms of failed treatments, Alzheimer’s disease is perhaps the most infamous of neurodegenerative diseases. Affecting more than 1 in 9 U.S. adults 65 and older, Alzheimer’s results from brain atrophy with the death of neurons and loss of the connections between them. These casualties contribute to memory and cognitive decline. Billions of dollars have been funneled into researching treatments for Alzheimer’s, but nearly every drug tested to date has failed in clinical trials.

Another common neurodegenerative disease in need of improved treatment options is multiple sclerosis. This autoimmune condition is caused by immune cells attacking the protective cover on neurons, known as myelin. Degrading myelin leads to communication difficulties between neurons and their connections with the rest of the body. Current treatments suppress the immune system and can have potentially debilitating side effects. Many of these treatment options fail to address the toxic effects of the myelin debris that accumulate in the nervous system, which can kill cells.

A New Frontier in Treating Neurodegeneration

Microglia are immune cells masquerading as brain cells. In mice, microglia originate in the yolk sac of an embryo and then infiltrate the brain early in development. The origins and migration of microglia in people are still under study.

Microglia play important roles in healthy brain function. Like other immune cells, microglia respond rapidly to pathogens and damage. They help to clear injuries and mend afflicted tissue, and can also take an active role in fighting pathogens. Microglia can also regulate brain inflammation, a normal part of the immune response that can cause swelling and damage if left unchecked.

Microglia also support the health of other brain cells. For instance, they can release molecules that promote resilience, such as the protein BDNF, which is known to be beneficial for neuron survival and function.

But the keystone feature of microglia is their astounding janitorial skill. Of all brain cell types, microglia possess an exquisite ability to clean up gunk in the brain, including the damaged myelin in multiple sclerosis, pieces of dead cells and amyloid beta, a toxic protein that is a hallmark of Alzheimer’s. They accomplish this by consuming and breaking down debris in their environment, effectively eating up the garbage surrounding them and their neighboring cells.

Given the many essential roles microglia serve to maintain brain function, these cells may possess the capacity to address multiple arms of neurodegeneration-related dysfunction. Moreover, as lifelong residents of the brain, microglia are already educated in the best practices of brain protection. These factors put microglia in the perfect position for researchers to leverage their inherent abilities to protect against neurodegeneration.

New data in both animal models and human patients points to a previously underappreciated role microglia also play in the development of neurodegenerative disease. Many genetic risk factors for diseases like Alzheimer’s and multiple sclerosis are strongly linked to abnormal microglia function. These findings support an accumulating number of animal studies suggesting that disruptions to microglial function may contribute to neurologic disease onset and severity.

This raises the next logical question: How can researchers harness microglia to protect the nervous system against neurodegeneration?

Engaging the Magic of Microglia

In our lab’s recent study, we keyed in on a crucial protein called SYK that microglia use to manipulate their response to neurodegeneration.

Our collaborators found that microglia dial up the activity of SYK when they encounter debris in their environment, such as amyloid beta in Alzheimer’s or myelin debris in multiple sclerosis. When we inhibited SYK function in microglia, we found that twice as much amyloid beta accumulated in Alzheimer’s mouse models and six times as much myelin debris in multiple sclerosis mouse models.

Blocking SYK function in the microglia of Alzheimer’s mouse models also worsened neuronal health, indicated by increasing levels of toxic neuronal proteins and a surge in the number of dying neurons. This correlated with hastened cognitive decline, as the mice failed to learn a spatial memory test. Similarly, impairing SYK in multiple sclerosis mouse models exacerbated motor dysfunction and hindered myelin repair. These findings indicate that microglia use SYK to protect the brain from neurodegeneration.

But how does SYK protect the nervous system against damage and degeneration? We found that microglia use SYK to migrate toward debris in the brain. It also helps microglia remove and destroy this debris by stimulating other proteins involved in cleanup processes. These jobs support the idea that SYK helps microglia protect the brain by charging them with removing toxic materials.

Finally, we wanted to figure out if we could leverage SYK to create “super microglia” that could help clean up debris before it makes neurodegeneration worse. When we gave mice a drug that boosted SYK function, we found that Alzheimer’s mouse models had lower levels of plaque accumulation in their brains one week after receiving the drug. This finding points to the potential of increasing microglia activity to treat Alzheimer’s disease.

The Horizon of Microglia Treatments

Future studies will be necessary to see whether creating a super microglia cleanup crew to treat neurodegenerative diseases is beneficial in people. But our results suggest that microglia already play a key role in preventing neurodegenerative diseases by helping to remove toxic waste in the nervous system and promoting the healing of damaged areas.

It’s possible to have too much of a good thing, though. Excessive inflammation driven by microglia could make neurologic disease worse. We believe that equipping microglia with the proper instructions to carry out their beneficial functions without causing further damage could one day help treat and prevent neurodegenerative disease.

Demand for Tankers Causes Shipping Rates to Explode

Oil Tanker Day Rates To Be Supported By The EU’s Ban On Russian Crude

Over the past 12 months, global container shipping rates have steadily declined to their long-term averages as supply chain snarls have receded and backups at ports have disappeared.

Now, another segment of the cargo shipping industry is seeing day rates explode to record highs.

So-called dirty tankers, those that carry crude oil, are charging over $100,000 a day for their services as international sanctions against Russia force ships—including Suezmaxes, Aframaxes and very large crude carriers (VLCCs)—to take longer, more circuitous routes. Carriers that once made deliveries to the North Sea port of Rotterdam via the Baltic Sea now have to sail to China, India and Turkey, routes that are two to three times the distance. All three countries have said they will continue to buy Russian oil.

The Baltic Exchange Dirty Tanker Index, which measures shipping rates on 12 international routes, rose as much as 243% for the 12-month period through the end of November.

So how high could rates go? According to Omar Nokta, a shipping analyst at Jefferies, they could potentially climb to between $150,000 and $200,000 a day.

We’re almost there now. The Aframax day rate to ship oil from the Black Sea to the Mediterranean hit an astronomical $145,000 a day during the week ended November 18, according to Compass Maritime.

Russia Oil Turmoil To Drive Tanker Market Higher

This week, the 27 countries of the European Union (EU) officially banned crude imports from Russia, the world’s number two producer, and on February 5, 2023, all Russian oil products will be banned as well. This will disrupt global trade routes further, driving up rates even more.

As you can see below, Europe’s imports of Russian oil were already down dramatically from the beginning of 2022, when Russia invaded Ukraine. Before the ban, the Netherlands was the only remaining European destination for deliveries outside of the Mediterranean and Black Sea basin. To help offset the loss of Russian supply, Norway will ship a record volume of North Sea oil in January, Bloomberg reports.

Oil Tankers Generating Record Revenues, Stocks Hitting New Highs

Due to changes in shipping routes, demand for oil tankers is expected to surge to levels not seen in three decades, according to Clarkson Research. The U.K.-based group is forecasting that the number of ton-miles, defined as one ton of freight shipped one mile, could increase 9.5% next year. That would mark the largest annual increase since 1993.
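
To make the ton-mile mechanics concrete, here is a minimal sketch in Python; the cargo size and route distances are assumed, illustrative figures, not Clarksons’ data:

```python
# Ton-miles = tons of freight x miles carried. Rerouting the same
# cargo over a longer voyage raises ton-mile demand proportionally.

def ton_miles(tons: float, miles: float) -> float:
    return tons * miles

cargo_tons = 100_000          # hypothetical crude parcel (illustrative)
old_route_miles = 1_500       # assumed short Baltic-to-Rotterdam run
new_route_miles = old_route_miles * 2.5   # article: new routes are 2-3x longer

multiplier = ton_miles(cargo_tons, new_route_miles) / ton_miles(cargo_tons, old_route_miles)
print(f"Same oil, {multiplier:.1f}x the shipping work (ton-miles)")
```

The same volume of oil thus ties up two to three times as much tanker capacity, which is how ton-mile demand can post its largest jump since 1993 even with cargo volumes only just back at pre-pandemic levels.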

Volumes are already at pre-pandemic levels, with VLCCs and Aframaxes having exceeded 2019 volumes for the first time since the second quarter of 2020.

Also supporting rates is the fact that oil carriers are replacing vessels at a historically low pace.

In July, Clarksons reported that new shipbuilding orders for container vessels had surpassed those for tankers for the first time ever. Whereas the global order book for containerships stood at 72.5 million deadweight tonnage (dwt)—a measure of how much weight a ship can carry—the order book for crude oil and oil product tankers was 34 million dwt, a new record low.

This has contributed to massive revenues and net income, which should keep carriers in a strong position even as container rates have fallen. Last week, Mitsui O.S.K. Lines president Takeshi Hashimoto told JPMorgan analysts that he believes profits will remain strong due to the company’s liquefied natural gas (LNG) business as well as its dry bulkers and tankers.

Frontline, the fourth-largest oil tanker company, just reported third-quarter net income of $154.4 million, beating the $108.5 million analysts had estimated.

Below you can see how revenues have surged for companies such as International Seaways, Ardmore Shipping and Scorpio Tankers.

With the S&P 500 still down more than 14% for the year, shares of a number of oil tankers have hit new all-time highs in recent days. Among those are Ardmore, International Seaways and Euronav. Teekay Tankers was up 226% year-to-date, while Scorpio was up 323% over the same period.

Will these returns continue? I can’t say, of course, but the structural support doesn’t appear to be going away anytime soon.

This article was republished with permission from Frank Talk, a CEO Blog by Frank Holmes of U.S. Global Investors (GROW). Find more of Frank’s articles here – Originally published October 21, 2021

US Global Investors Disclaimer

All opinions expressed and data provided are subject to change without notice. Some of these opinions may not be appropriate to every investor. By clicking the link(s) above, you will be directed to a third-party website(s). U.S. Global Investors does not endorse all information supplied by this/these website(s) and is not responsible for its/their content.

The Baltic Dirty Tanker Index is made up from 12 routes taken from the Baltic International Tanker Routes. The S&P 500 is widely regarded as the best single gauge of large-cap U.S. equities and serves as the foundation for a wide range of investment products. The index includes 500 leading companies and captures approximately 80% coverage of available market capitalization.

Holdings may change daily. Holdings are reported as of the most recent quarter-end. The following securities mentioned in the article were held by one or more accounts managed by U.S. Global Investors as of (09/30/22): Mitsui OSK Lines Ltd.

The Darknet Supply Chain Creates Risk for Almost All Online Transactions

Image Credit: Richard Patterson (Flickr)

Darknet Markets Generate Millions in Revenue Selling Stolen Personal Data, Supply Chain Study Finds

It is common to hear news reports about large data breaches, but what happens once your personal data is stolen? Our research shows that, like most legal commodities, stolen data products flow through a supply chain consisting of producers, wholesalers and consumers. But this supply chain involves the interconnection of multiple criminal organizations operating in illicit underground marketplaces.

The stolen data supply chain begins with producers – hackers who exploit vulnerable systems and steal sensitive information such as credit card numbers, bank account information and Social Security numbers. Next, the stolen data is advertised by wholesalers and distributors who sell the data. Finally, the data is purchased by consumers who use it to commit various forms of fraud, including fraudulent credit card transactions, identity theft and phishing attacks.

This trafficking of stolen data between producers, wholesalers and consumers is enabled by darknet markets, which are websites that resemble ordinary e-commerce websites but are accessible only using special browsers or authorization codes.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Christian Jordan Howell, Assistant Professor in Cybercrime, University of South Florida, and David Maimon, Professor of Criminal Justice and Criminology, Georgia State University.

We found several thousand vendors selling tens of thousands of stolen data products on 30 darknet markets. These vendors had more than US$140 million in revenue over an eight-month period.

The stolen data supply chain, from data theft to fraud. Christian Jordan Howell, CC BY-ND

Darknet Markets

Just like traditional e-commerce sites, darknet markets provide a platform for vendors to connect with potential buyers to facilitate transactions. Darknet markets, though, are notorious for the sale of illicit products. Another key distinction is that access to darknet markets requires the use of special software such as the Onion Router, or Tor, which provides security and anonymity.

Silk Road, which emerged in 2011, combined Tor and bitcoin to become the first known darknet market. The market was eventually seized in 2013, and the founder, Ross Ulbricht, received two life sentences plus 40 years without the possibility of parole. Ulbricht’s hefty prison sentence did not appear to have the intended deterrent effect. Multiple markets emerged to fill the void and, in doing so, created a thriving ecosystem profiting from stolen personal data.

Example of a stolen data ‘product’ sold on a darknet market. Screenshot by Christian Jordan Howell, CC BY-ND

Stolen Data Ecosystem

Recognizing the role of darknet markets in trafficking stolen data, we conducted the largest systematic examination of stolen data markets that we are aware of to better understand the size and scope of this illicit online ecosystem. To do this, we first identified 30 darknet markets advertising stolen data products.

Next, we extracted information about stolen data products from the markets on a weekly basis for eight months, from Sept. 1, 2020, through April 30, 2021. We then used this information to determine the number of vendors selling stolen data products, the number of stolen data products advertised, the number of products sold and the amount of revenue generated.

In total, 2,158 vendors advertised at least one of the 96,672 product listings across the 30 marketplaces. Vendors and product listings were not distributed equally across markets. On average, a marketplace had 109 unique vendor aliases and 3,222 product listings related to stolen data products. Together, these markets recorded 632,207 sales, generating $140,337,999 in total revenue. Again, there is high variation across the markets: on average, a marketplace had 26,342 sales and generated $5,847,417 in revenue.
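
For readers curious how such figures are tallied, the sketch below shows one way weekly scrape data could be aggregated in Python; the file and column names are hypothetical stand-ins, not the study’s actual schema:

```python
import pandas as pd

# Hypothetical weekly-scrape table; file and column names are assumptions.
scrapes = pd.read_csv("stolen_data_listings.csv")

per_market = scrapes.groupby("market").agg(
    vendors=("vendor_alias", "nunique"),   # unique vendor aliases per market
    listings=("listing_id", "nunique"),    # unique product listings per market
    sales=("sales", "sum"),
    revenue_usd=("revenue_usd", "sum"),
)

print(per_market.sum())    # ecosystem-wide totals
print(per_market.mean())   # per-market averages

# Vendors can operate on several markets, so ecosystem-wide unique
# vendors must be counted globally, not summed across markets:
print(scrapes["vendor_alias"].nunique())
```

Counting unique vendors globally rather than summing per-market counts matters: 30 markets averaging 109 vendor aliases apiece would suggest over 3,000 vendors, but the ecosystem-wide figure is 2,158, presumably because many vendors operate on more than one market.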

The size and scope of the stolen data ecosystem over an eight-month period. Christian Jordan Howell, CC BY-ND

After assessing the aggregate characteristics of the ecosystem, we analyzed each of the markets individually. In doing so, we found that a handful of markets were responsible for trafficking most of the stolen data products. The three largest markets – Apollon, White House and Agartha – contained 58% of all vendors. The number of listings ranged from 38 to 16,296, and the total number of sales ranged from 0 to 237,512. The total revenue of markets also varied substantially during the 35-week period: It ranged from $0 to $91,582,216 for the most successful market, Agartha.

For comparison, most midsize companies operating in the U.S. earn between $10 million and $1 billion annually. Both Agartha and Cartel earned enough revenue within the 35-week period we tracked them to be characterized as midsize companies, earning $91.6 million and $32.3 million, respectively. Other markets like Aurora, DeepMart and White House were also on track to reach the revenue of a midsize company if given a full year to earn.

Our research details a thriving underground economy and illicit supply chain enabled by darknet markets. As long as data is routinely stolen, there are likely to be marketplaces for the stolen information.

These darknet markets are difficult to disrupt directly, but efforts to prevent buyers of stolen data from using it offer some hope. We believe that advances in artificial intelligence can provide law enforcement agencies, financial institutions and others with the information needed to prevent stolen data from being used to commit fraud. This could stop the flow of stolen data through the supply chain and disrupt the underground economy that profits from your personal data.

In the Womb a Genetic Battle Sometimes Rages

Image Credit: Rotterdams Ballonenbed (Flickr)

Pregnancy is a Genetic Battlefield – How Conflicts of Interest Pit Mom’s and Dad’s Genes Against Each Other

Baby showers. Babymoons. Baby-arrival parties. There are many opportunities to celebrate the 40-week transition to parenthood. Often, these celebrations implicitly assume that pregnancy is cooperative and mutually beneficial to both the parent and the fetus. But this belief obscures a more interesting truth about pregnancy – the mother and the fetus may not be peacefully coexisting in the same body at all.

At the most fundamental level, there is a conflict between the interests of the parent and fetus. While this may sound like the beginning of a thriller, this genetic conflict is a normal part of pregnancy, leading to typical growth and development both during pregnancy and across an individual’s lifetime – something my research focuses on.

However, even though genetic conflict is normal during pregnancy, it can play a role in pregnancy complications and developmental disorders when left unchecked.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Jessica D. Ayers, Assistant Professor of Psychological Science, Boise State University.

What is Genetic Conflict?

Pregnancy is generally thought of as a period when a new individual is created from a unified blend of genes from their parents. But this is not quite right.

The genes a fetus gets from each parent carry slightly different instructions for development. This means there are contrasting and sometimes conflicting blueprints for how to build the new individual. Conflict over which blueprint to follow for fetal growth and development is the essence of the genetic conflict that occurs during pregnancy.

Moms have to use their bodies to help the fetus grow during pregnancy while dads don’t. This means that the genes the fetus inherits from mom have to not only provide for the current fetus, but also try to keep mom alive and healthy and make sure there are resources left over for a potential future pregnancy. These reserves include both biological resources like glucose, protein, iron and calcium, as well as the time and energy needed to help her children after birth as they grow and develop.

Dad’s genes don’t have this same pressure because they don’t use their bodies to help the fetus grow during pregnancy. A dad’s genes, then, don’t need to ensure that anyone other than the current fetus thrives.

To better understand this situation, pretend that all of the resources a mom can give her children come in the form of a milkshake. Once the milkshake runs out, mom has nothing left to give her children. Maternal genes, therefore, want each child to drink only as much as they need to grow and develop. This ensures that the milkshake can be “shared” across all current and future children.

Paternal genes, on the other hand, have no such guarantee of representation in this mother’s other children – the father of the current child may not be the father of the mother’s potential future children. This lack of guaranteed genetic representation means there is no pressure on the father to “share” the milkshake. The best strategy when it comes to paternal genes, then, is for the fetus to drink as much of the milkshake as they can.
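
A toy calculation makes the divergence concrete. In the sketch below – my illustration, not the author’s model – each child’s benefit from its share of the milkshake has diminishing returns, and the mother expects two more children after this one:

```python
import numpy as np

milkshake = 1.0          # total maternal resources, normalized
future_children = 2      # assumed number of mom's future offspring

def maternal_fitness(share):
    # Mom's genes are equally related to all of her children, so they
    # value this child's share plus an even split of what remains.
    rest = (milkshake - share) / future_children
    return np.sqrt(share) + future_children * np.sqrt(rest)

def paternal_fitness(share):
    # Dad's genes may not be carried by mom's future children,
    # so they value only the current child's share.
    return np.sqrt(share)

shares = np.linspace(0.01, 0.99, 99)
best_for_mom = shares[np.argmax([maternal_fitness(s) for s in shares])]
best_for_dad = shares[np.argmax([paternal_fitness(s) for s in shares])]
print(f"maternal optimum: {best_for_mom:.2f}")  # ~0.33, an even three-way split
print(f"paternal optimum: {best_for_dad:.2f}")  # 0.99, take nearly everything
```

The square-root payoff and the two future children are arbitrary assumptions, but any diminishing-returns curve produces the same qualitative split: maternal genes favor sharing evenly, while paternal genes favor the current child taking nearly everything.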

These two strategies play a figurative game of tug of war throughout pregnancy. Both sides are trying to pull fetal development slightly more toward their side. Paternal genes encourage the fetus to grow and develop quickly and take more resources, while maternal genes encourage the fetus to grow and use only what’s necessary for proper development. Conflict over how deeply the embryo implants in the uterus and how quickly the placenta and fetus grow are just a few areas where researchers have documented this tug of war during pregnancy.

The milkshake problem helps researchers determine where to look for genetic conflict by simplifying where trade-offs may take place during pregnancy. Because fetal growth is at the heart of genetic conflict, researchers have focused on processes where conflict over resource transfers from mother to fetus can be observed. These investigations have found that the placenta, a fetal organ responsible for all resource transfers during pregnancy, is dominated by paternally expressed genes. It releases paternally derived insulin-like growth factors that make mom less sensitive to her own insulin, as well as hormones that increase maternal blood pressure – both of which ultimately increase the amount of resources the fetus can use to grow during pregnancy but have the potential to harm the mother’s health.

Genetic Conflict and Pregnancy Complications

If genetic conflict goes uncontrolled, it can cause pregnancy complications for the mother and developmental disorders for the child. In fact, there is a growing consensus among researchers that some of the most well-known pregnancy complications like preeclampsia, gestational diabetes, miscarriages and preterm births may best be explained by unchecked genetic conflict.

Despite the potential role that genetic conflict plays in pregnancy complications, current medical treatments are reactive rather than proactive. A pregnant person must show signs of experiencing complications before medical interventions and treatments can take place.

Knowing how unchecked genetic conflict contributes to pregnancy complications could give researchers a way to develop treatments that are proactive and, ideally, preventive. However, no current treatments for pregnancy complications take genetic conflict into account. Though gestational diabetes can be attributed to underlying genetic conflict, a pregnant person must present with elevated blood sugar levels before doctors can treat the conflict over insulin production and blood sugar that drives it.

Pregnancy During the COVID-19 Pandemic Has Been Challenging for Many

The experiences of pregnant people during the COVID-19 pandemic provide an example of why more research on genetic conflict is needed. During the pandemic, doctors saw both a dramatic decrease in the number of preterm births as well as an increase in the number of stillbirths and miscarriages. Both types of complications are influenced by genetic conflict, but the reasons behind these opposing trends are unclear.

As a woman who was pregnant early in the pandemic, my pregnancy was scary and stressful, spent at home away from the pressures of “normal” life. More research on the complex process of pregnancy and genetic conflict’s role in complications could help researchers better understand how the changes brought by the pandemic produced such wildly different pregnancy outcomes.

The Success Rate of Noninvasive Transcranial Magnetic Stimulation for Depression

Image Credit: NIH (Flickr)

Patients Suffering with Hard-to-Treat Depression May Get Relief from Noninvasive Magnetic Brain Stimulation

Not only is depression a debilitating disease, but it is also widespread. Approximately 20 million adult Americans experience at least one episode of depression per year.

Millions of them take medication to treat their depression. But for many, the medications don’t work: Either they have minimal or no effect, or the side effects are intolerable. These patients have what is called treatment-resistant depression.

One promising treatment for such patients is a type of brain stimulation therapy called transcranial magnetic stimulation.

This treatment is not new; it has been around since 1995. The U.S. Food and Drug Administration cleared transcranial magnetic stimulation in 2008 for adults with “non-psychotic treatment-resistant depression,” which is typically defined as a failure to respond to two or more antidepressant medications. More recently, in 2018, the FDA cleared it for some patients with obsessive-compulsive disorder and for smoking cessation.

Insurance generally covers these treatments. Both the psychiatrist and the equipment operator must be certified. While the treatment has been available for years, the equipment to perform the procedure remains expensive enough that few private psychiatry practices can afford it. But with the growing recognition of the potential of transcranial magnetic stimulation, the price will likely eventually come down and access will be greatly expanded.

Does it Work?

Transcranial magnetic stimulation is a noninvasive, pain-free procedure that has minimal to no side effects, and it often works. Research shows that 58% of once treatment-resistant patients experience a significant reduction in depression following four to six rounds of the therapy. More than 40 independent clinical trials – with more than 2,000 patients worldwide – have demonstrated that repetitive transcranial magnetic stimulation is an effective therapy for the treatment of resistant major depression.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Patricia Junquera, Associate Professor and Vice Chair of Clinical Services, Florida International University.

As a professor and psychiatrist who has used transcranial magnetic stimulation to treat some of my patients, I have seen depression symptoms decrease even within the first two weeks of treatment. What’s more, the effects continue after the treatment has ended, typically for six months to a year. After that, the patient has the option of maintenance treatment.

About the Procedure

For the patient, the procedure is easy and simple. One sits in a comfortable chair with a snug pillow that holds their head in place, puts on earplugs and can then relax, check their phone, watch TV or read a book.

A treatment coil, which looks like a figure 8, is placed on the patient’s head. A nearby stimulator sends an electrical current to the coil, which transforms the current into a magnetic field.

The field, which is highly concentrated, turns on and off rapidly while targeting a portion of the prefrontal cortex – the area of the brain responsible for mood regulation.

Researchers know that people suffering from depression have reduced blood flow and less activity in that part of the brain. Transcranial magnetic stimulation causes increases in both blood flow and in the levels of dopamine and glutamate – two neurotransmitters that are responsible for brain functions like concentration, memory and sleep. It’s the repeated stimulation of this area – the “depression circuit” of the brain – that brings the antidepressant effect.

It is Not ‘Electroshock’ or Deep Brain Stimulation

Some people confuse transcranial magnetic stimulation with electroconvulsive therapy, a procedure used for patients with severe depression or catatonia. With electroshock therapy, the anesthetized patient receives a direct electrical current, which causes a seizure. Typically, people who undergo this procedure experience some memory loss after treatment.

Transcranial magnetic stimulation is very different. It doesn’t require anesthesia, and it doesn’t affect memory. The patient can resume daily activities right after each treatment. Dormant brain connections are reignited without causing a seizure.

It should also not be confused with deep brain stimulation, which is a surgical procedure used to treat obsessive-compulsive disorder, tremors, epilepsy and Parkinson’s disease.

Side Effects and Access

Transcranial magnetic stimulation patients undergo a total of 36 treatments, at 19 minutes each, over three to six weeks. Research has concluded that this is the best protocol for treatment. Some patients report that it feels like someone is tapping on their head. Others don’t feel anything.

Some very minor side effects may occur. The most common is facial twitching and scalp discomfort during treatment, sensations that go away after the session ends. Some patients report a mild headache or discomfort at the application site. Depending on how effective the therapy was, some patients return for follow-ups every few weeks or months. It can be used in addition to medications, or with no medication at all.

Not everyone with depression can undergo this type of brain stimulation therapy. Those with epilepsy or a history of head injury may not qualify. People with metallic fillings in their teeth are OK for treatment, but others with implanted, nonremovable metallic devices in or around the head are not. Those with pacemakers, defibrillators and vagus nerve stimulators may also not qualify, because the magnetic force of the treatment coil may dislodge these devices and cause severe pain or injury.

But for those who are able to use the therapy, the results can be remarkable. For me, it is amazing to see these patients smile again – and come out on the other side feeling hopeful.

Scoring Geopolitical Points at World Cup 2022

Image Credit: Diego Sideburns (Flickr)

World Cup 2022: How Sponsorship Has Become Less About Selling Drinks and More About Geopolitics

The Fifa men’s World Cup 2022 in Qatar is arguably the most political in history.

Even during the seemingly innocuous performance of South Korean pop star Jung Kook at the tournament’s opening ceremony, geopolitics were center stage. For Kook, 25, is not just a good-looking young man with a global fan base and a multi-million dollar fortune. In addition, he has a lucrative endorsement deal with the South Korean car maker Hyundai-Kia, which also happens to be a major Fifa sponsor.

This kind of relationship is neither an accident nor a simple business arrangement. For years, the South Korean government has been pursuing a strategy aimed at building and projecting “soft power”, developing its engagement with target audiences around the world. This has happened not just through football, music and cars, but also through Oscar-winning films like Parasite and the massively popular TV series Squid Game.

And it’s not just South Korea taking advantage of the audiences that Fifa can provide. For while sellers of soft drinks and burgers are still part of the sponsorship roster, Fifa’s key partners are increasingly big corporations from countries keen to benefit from the global reach of football.

State-owned Qatar Airways, for example, is busy selling plane tickets as Fifa’s official airline partner, but it also plays a pivotal role in attempts by the Qatari government to establish Hamad International Airport as a major hub of global travel.

The award-winning airline is an effective instrument of soft power, transmitting signals to global audiences about what Qatar is and what it aspires to be. In turn, the airline, and the very act of hosting the 2022 World Cup, are both illustrations of a nation intent on telling the world a particular story about itself – that it is a legitimate, trustworthy and important member of the international community.

The same applies to China, even though its sporting and industrial progress have stalled somewhat since the pandemic. Its roster of four key World Cup sponsors – electronics (Hisense), mobile phones (Vivo), dairy products (Mengniu) and everything from property to media (Wanda) – remains significant for a country hopeful of one day staging the tournament itself and a government keen to spread China’s influence around the world.

Rebels With a Cause

Alongside the World Cup’s main sponsors, a tradition has emerged of business competitors engaging in “ambush” marketing during tournaments. This involves brands using the mega-event as a marketing tool without the considerable expense of an official link (Fifa is reportedly charging around US$100 million (£82 million) for a four-year sponsorship deal).

One notably successful ambush was Bavaria Beer’s provocative campaign at the 2006 World Cup in Germany, repeated in 2010 in South Africa. These stunts involved equipping spectators with branded clothing, which was smuggled into stadiums. They gained huge global attention, which was no doubt frustrating for the tournaments’ “official” beer, Budweiser.

Brightly colored ambush in 2010. EPA/STR

Yet even ambush marketing now appears to have become geopoliticised. For instance, during this World Cup, the authorities in nearby Dubai have been trying to draw attention away from Qatar with a tourism campaign featuring international football stars. The rival emirate will also be staging its own football tournament at the same time as the World Cup, featuring the likes of Liverpool, AC Milan and Arsenal.

And while in 2010, Bavaria Beer used women wearing orange dresses in its ambush, the UK-based brewer and pub chain BrewDog is trying to get in on this year’s action with its strident anti-World Cup marketing campaign.

Through a series of provocative billboards (in the UK), BrewDog is using references to autocracy, human rights abuses and corruption, all targeted at beer drinkers perturbed about Qatar’s staging of football’s biggest global event. While the bottom line remains the same for BrewDog – to make a profit by selling beer – it is nevertheless contributing to the transformation of advertising and sponsorship from simple marketing into geopolitical posturing.

In a similar way, apparel brand Hummel has decided to hide its name and logo, as well as the Danish football association’s badge, on the Danish team’s kit. This is in protest against the treatment of migrant workers in Qatar and in support of LGBTQ+ communities.

In the company’s mission statement, Hummel emphasises its commitment to “Danishness” – and indeed, Denmark has been highly vocal in its condemnation of Qatar. Whenever the national team takes to the field, it will be in shirts that directly challenge the World Cup hosts.

So Qatar’s expensive ambitions in staging this tournament have come up against criticism and protest from countries and corporations alike. In 2022 it seems that football sponsorship is no longer just for kicks, or even customers. Everywhere you look, there are geopolitical points to be scored.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Simon Chadwick, Professor of Sport and Geopolitical Economy, SKEMA Business School.

Even Jeff Bezos Suggests Consumers Should Slow Spending – Can Holiday Spending Meet Expectations?

Image Credit: Anthony Quintano (Flickr)

Retailers May See More Red After Black Friday as Consumers Say They Plan to Pull Back on Spending

Retailers are gearing up for another blockbuster holiday shopping season, but consumers burned by the highest inflation in a generation may have other ideas.

Industry groups are predicting another record year of retail sales, with the National Retail Federation forecasting a jump of 6% to 8% over the US$890 billion consumers spent online and in stores in November and December of 2021.
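
In dollar terms, that forecast works out as follows (a quick arithmetic sketch):

```python
base_2021 = 890e9   # Nov-Dec 2021 holiday sales, USD
low, high = base_2021 * 1.06, base_2021 * 1.08
print(f"NRF forecast range: ${low / 1e9:.0f} billion to ${high / 1e9:.0f} billion")
# -> roughly $943 billion to $961 billion
```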

But Jeff Bezos, founder and chairman of the biggest retailer of them all, seems to be anticipating a much less festive holiday for businesses. In November 2022, Amazon said it is laying off 10,000 workers, one of several big companies announcing job cuts recently. Bezos even cautioned consumers to hold off on big purchases like cars, televisions and appliances to save in case of a recession in 2023.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Ayalla A. Ruvio, Associate Professor of Marketing and the Director of the MS of Marketing Research program, Michigan State University and Forrest Morgeson, Assistant Professor of Marketing, Michigan State University.

Results from our new survey suggest consumers appear to be already taking Bezos’ advice, as a combination of soaring consumer prices, rising borrowing costs and growing odds of a recession weighs on their wallets. And if our survey results do pan out, it may mean the recession everyone’s worried about happens sooner than expected.

Crisis Behaviors

We conducted our survey in mid-November, about a week before Black Friday, the traditional start of the holiday shopping season. The day after Thanksgiving is known as Black Friday because it signals the period when retailers hope to sell enough goods that their income statements show “black,” or profit, for the year rather than “red,” which refers to losses.

We asked over 500 consumers a series of questions about their spending plans, concerns and priorities during this year’s holiday season. Participants were split evenly between men and women, and almost two-thirds had a household income of $70,000 or less.

Overall, the most alarming conclusion from our research is that consumers are reporting consumption behaviors typically exhibited during an economic crisis, similar to those observed in 2009 by consultancy McKinsey during the Great Recession.

One data point stands out: An overwhelming 62% said they were concerned about their job security, while almost 35% indicated they were “very” or “extremely” worried about their financial situation.

Here are three behaviors we found in our survey that suggest consumers are behaving as if the U.S. economy is already in a recession.

1. Spending Less

Not surprisingly, cutting spending is the first thing consumers do during economic turmoil.

A study by McKinsey in early 2009 found that 90% of U.S. households cut spending due to the Great Recession, with 33% of consumers indicating a significant cut.

Similarly, respondents to our survey said they plan to spend, on average, around $700 this holiday season, substantially lower than the roughly $880 consumers spent during each of the past three seasons – including early in the pandemic in 2020.

About a third of our sample intended to spend “slightly” or “much” less than in 2021, while 35% said they would spend “about the same” – which from a retailer’s perspective means spending less because last year’s dollars don’t go as far today. The rest said they planned to spend a little or much more.
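
A quick sketch shows why flat nominal spending is really a cut. The inflation rate below is an assumption on our part – roughly the October 2022 year-over-year CPI reading – not a figure from the survey:

```python
nominal_spend = 880.0   # approximate per-shopper spend last season, USD
cpi_inflation = 0.077   # assumed year-over-year inflation rate

real_value = nominal_spend / (1 + cpi_inflation)
print(f"${nominal_spend:.0f} spent this year buys what "
      f"${real_value:.0f} bought a year ago")   # -> about $817
```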

Inflation is one of the key reasons consumers say they are spending less. Almost 80% of respondents said they are either moderately, very or extremely concerned about the surge in prices, and 87% said those concerns would affect their holiday spending behavior, such as by buying gifts for fewer people or purchasing less expensive items.

Some of our respondents even said they were planning to make their own gifts or buy used goods, rather than shop for new items. The secondhand market has boomed in the last few years, and many shoppers view this option as a way to combat inflationary pressures.

2. Planning Ahead

Another thing consumers do when they sense a troubled economy is they plan their purchases more carefully and maintain self-control over spending.

Common strategies include spending more time searching for the best deals, adhering to strict shopping lists, prioritizing necessities and making purchases earlier to spread out their spending – all of which were mentioned by our survey respondents.

We may already be seeing signs of this last strategy. Retail sales for October were up 1.3% from the previous month and up 8.3% from October 2021, which may reflect consumers’ early holiday shopping. If that is the case, this early shopping may result in slumping sales in December.

Also, purchasing early – aided by the plethora of steep discounts offered well in advance of Black Friday – allows consumers to better control their shopping behavior and reduces the risk of impulse buying. A reduction in impulse buying is a strong indicator that consumers are shopping as if the economy were in recession.

In our survey, we found that over 50% of participants said that they would be using savings to cover the cost of holiday spending, with many stressing that they would pay with cash. Using cash as a primary form of payment is the main tool consumers have to control spending.

Only 15% of our respondents said that they would use buy-now-pay-later options, which to us is another sign that consumers prefer cash over forms of credit that create new debt.

3. Hypersensitivity to Price

During economic crises, consumers become hypersensitive to prices, which come to trump most other considerations.

A whopping 90% of our respondents confirmed that price is their major consideration when shopping during the holidays this year. Other elements of price sensitivity are free shipping, product value and the level of discount, if any.

The singular focus of consumers on price gives retailers a wide range of potential responses, including promoting house brands and private labels that are perceived as having greater value for money. In fact, according to the 2009 McKinsey report, one of the biggest shifts in consumer behavior during and after the 2008 recession was the switch in preference from high-priced premium brands to value brands that tend to have lower prices but still decent quality. During an economic slowdown, consumers typically stop buying brands they are not strongly connected with or loyal to.

Consumers in our survey said buying brand names will be one of the least important influences on their purchases this season.

While economists debate whether a recession is coming, or even whether the U.S. is already in one, our data suggests consumers are beginning to behave like one is already here. That risks becoming a self-fulfilling prophecy as consumers tighten their belts.

Rail Worker Impasse Likely – What’s Around the Next Turn?

Railroad Unions and Their Employers at an Impasse: Freight-Halting Strikes Are Rare, and This Would Be the First in 3 Decades

The prospect of a potentially devastating rail workers strike is looming again.

Fears of a strike in September 2022 prompted the Biden administration to pull out all the stops to get a deal between railroads and the largest unions representing their employees.

That deal hinged on ratification by a majority of members at all 12 of those unions. So far, eight have voted in favor, but four have rejected the terms. If even one continues to reject the deal after further negotiations, it could mean a full-scale freight strike will start as soon as midnight on Dec. 5, 2022. Any work stoppage by conductors and engineers would surely interfere with the delivery of gifts and other items Americans will want to receive in time for the holiday season, along with coal, lumber and other key commodities.

Strikes that obstruct transportation rarely occur in the United States, and the last one involving rail workers happened three decades ago. But when these workers do walk off the job, it can thrash the economy, inconveniencing millions of people and creating a large-scale crisis.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Erik Loomis, Professor of History, University of Rhode Island.

I’m a labor historian who has studied the history of American strikes. With the U.S. teetering toward at least a mild recession and some of the supply chain disruptions that arose at the outset of the COVID-19 pandemic still wreaking havoc, I don’t think the administration would accept a rail strike for long.

19th Century Rail Strikes

Few, if any, workers have more power over the economy than transportation workers. Their ability to shut down the entire economy has often led to heavy retaliation from the government when they have tried to exercise that power.

In 1877, a small strike against a West Virginia railroad that had cut wages quickly spread. It grew into what became known as the Great Railroad Strike, a general rebellion against the railroads that brought thousands of unemployed workers into the streets.

Seventeen years later, in 1894, the American Railway Union went on strike in solidarity with workers at the Pullman sleeping car company, who had walked out after the company cut their wages while keeping the rents on its company housing unchanged.

In both cases, the threat of a railroad strike led the federal government to call out the military to crush the labor actions. Dozens of workers died.

In the century and more since those dramatic clashes ended, rail unions have played a generally quiet role, preferring to focus on the needs of their members and avoiding most broader social and political questions. Fearful of more rail strikes, the government passed the Railway Labor Act of 1926, which gives Congress the power to intervene before a rail strike starts.

Breaking the Air Traffic Controllers Union

With travel by road and air growing in importance in the 20th century, other transportation workers also engaged in actions that could shut down the economy.

The Professional Air Traffic Controllers Association walked off the job in 1981 after a decade of increased militancy over the stress and conditions of their job. The union had engaged in a series of slowdowns through the 1970s, delaying airplanes and frustrating passengers.

When it went on strike in 1981, the union broke the law, as federal workers do not have the right to strike. That’s when President Ronald Reagan became the first modern U.S. leader to retaliate against striking transportation workers. Two days after warning the striking workers that they would lose their jobs unless they returned to work, Reagan fired more than 11,000 of them. He also banned them from ever being rehired.

In the aftermath of Reagan’s actions, the number of strikes by U.S. workers plummeted. Rail unions engaged in brief strikes in both 1991 and 1992, but Congress used the Railway Labor Act to halt them, ordering workers back on the job and imposing a contract upon the workers.

In 1992, Congress passed another measure that forced railroad workers into a system of arbitration before any strike – further eroding workers’ power to strike.

New Era of Labor Militancy

Following decades of decline in the late 20th century, U.S. labor organizing has surged in recent years.

Most notably, unionization attempts at Starbucks and Amazon have led to surprising successes against some of the biggest corporations in the country. Teachers’ unions around the nation have also held a series of successful strikes everywhere from Los Angeles to West Virginia.

United Parcel Service workers, who held the nation’s last major transportation strike, in 1997, may head back to the picket lines after their contract expires in June 2023. UPS workers, members of the Teamsters union, are angry over a two-tiered system that pays newer workers lower wages, and they are also demanding greater overtime protections.

But rail workers, angered by their employers’ refusal to offer sick leave and other concerns, may go on strike first.

Rail companies have greatly reduced the number of people they employ on freight trains as part of their efforts to maximize profits and take advantage of technological progress. They now generally limit crews to just two people per train.

Many companies want to pare back their workforce further, saying that single-person crews can operate freight trains safely. The unions reject this arrangement, saying that losing a second set of eyes would be a recipe for mistakes, accidents and disasters.

The deal the Biden administration brokered in September would raise annual pay by 24% over several years, raising the average pay for rail workers to $110,000 by 2024. But strikes are often about much more than wages. The companies have also long refused to provide paid sick leave or to stop demanding that their workers have inflexible and unpredictable schedules.

The Biden administration had to cajole the rail companies into offering a single personal day, while workers demanded 15 days of sick leave; the companies had offered zero. The agreement did remove penalties for workers who take unpaid sick or family leave, but it would still leave a group of well-paid workers whose daily lives are filled with stress and fear.

What Lies Ahead

Seeing highly paid workers threaten to take action that would surely compound strains on supply chains at a time when inflation is at a four-decade high may not win rail unions much public support.

A coalition representing hundreds of business groups has called for government intervention to make sure freight trains keep moving, and it’s highly likely that Congress will again impose a decision on workers under the Railway Labor Act. The Biden administration, which has shown significant sympathy to unions, has resisted supporting such a step so far.

No one should expect the military to intervene like it did in the 19th century. But labor law remains tilted toward companies, and I believe that if the government were to compel striking rail workers back on the job, the move might find a receptive audience.

Scientists Uncover a Surprise in the Function of Essential Genes 

Image Credit: National Human Genome Research Institute (Flickr)

Scientists Unveil the Functional Landscape of Essential Genes

Nicole Davis | Whitehead Institute

A team of scientists at the Whitehead Institute for Biomedical Research and the Broad Institute of MIT and Harvard has systematically evaluated the functions of over 5,000 essential human genes using a novel, pooled, image-based screening method. Their analysis harnesses CRISPR-Cas9 to knock out gene activity and forms a first-of-its-kind resource for understanding and visualizing gene function in a wide range of cellular processes with both spatial and temporal resolution. The team’s findings span over 31 million individual cells and include quantitative data on hundreds of different parameters that enable predictions about how genes work and operate together. The new study appears in the Nov. 7 online issue of the journal Cell.

“For my entire career, I’ve wanted to see what happens in cells when the function of an essential gene is eliminated,” says MIT Professor Iain Cheeseman, who is a senior author of the study and a member of Whitehead Institute. “Now, we can do that, not just for one gene but for every single gene that matters for a human cell dividing in a dish, and it’s enormously powerful. The resource we’ve created will benefit not just our own lab, but labs around the world.”

Systematically disrupting the function of essential genes is not a new concept, but conventional methods have been limited by various factors, including cost, feasibility, and the ability to fully eliminate the activity of essential genes. Cheeseman, who is the Herman and Margaret Sokol Professor of Biology at MIT, and his colleagues collaborated with MIT Associate Professor Paul Blainey and his team at the Broad Institute to define and realize this ambitious joint goal. The Broad Institute researchers have pioneered a new genetic screening technology that marries two approaches — large-scale, pooled, genetic screens using CRISPR-Cas9 and imaging of cells to reveal both quantitative and qualitative differences. Moreover, the method is inexpensive compared to other methods and is practiced using commercially available equipment.

“We are proud to show the incredible resolution of cellular processes that are accessible with low-cost imaging assays in partnership with Iain’s lab at the Whitehead Institute,” says Blainey, a senior author of the study, an associate professor in the Department of Biological Engineering at MIT, a member of the Koch Institute for Integrative Cancer Research at MIT, and a core institute member at the Broad Institute. “And it’s clear that this is just the tip of the iceberg for our approach. The ability to relate genetic perturbations based on even more detailed phenotypic readouts is imperative, and now accessible, for many areas of research going forward.”

Cheeseman adds, “The ability to do pooled cell biological screening just fundamentally changes the game. You have two cells sitting next to each other and so your ability to make statistically significant calculations about whether they are the same or not is just so much higher, and you can discern very small differences.”

Cheeseman, Blainey, lead authors Luke Funk and Kuan-Chung Su, and their colleagues evaluated the functions of 5,072 essential genes in a human cell line. They analyzed four markers across the cells in their screen — DNA; the DNA damage response, a key cellular pathway that detects and responds to damaged DNA; and two important structural proteins, actin and tubulin. In addition to their primary screen, the scientists also conducted a smaller, follow-up screen focused on some 200 genes involved in cell division (also called “mitosis”). The genes were identified in their initial screen as playing a clear role in mitosis but had not been previously associated with the process. These data, which are made available via a companion website, provide a resource for other scientists to investigate the functions of genes they are interested in.

“There’s a huge amount of information that we collected on these cells. For example, for the cells’ nucleus, it is not just how brightly stained it is, but how large is it, how round is it, are the edges smooth or bumpy?” says Cheeseman. “A computer really can extract a wealth of spatial information.”

Flowing from this rich, multi-dimensional data, the scientists’ work provides a kind of cell biological “fingerprint” for each gene analyzed in the screen. Using sophisticated computational clustering strategies, the researchers can compare these fingerprints to each other and construct potential regulatory relationships among genes. Because the team’s data confirms multiple relationships that are already known, it can be used to confidently make predictions about genes whose functions and/or interactions with other genes are unknown.
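
To make the clustering idea concrete, here is a minimal sketch, with entirely hypothetical gene names and feature values, of how per-gene phenotype vectors can be grouped by similarity. It illustrates the general approach only, not the authors’ actual pipeline.

```python
# Sketch: clustering per-gene phenotype "fingerprints" to suggest shared function.
# Hypothetical genes and random features; the study's real pipeline is far richer.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
genes = ["GENE_A", "GENE_B", "GENE_C", "GENE_D", "GENE_E", "GENE_F"]  # placeholders
fingerprints = rng.normal(size=(len(genes), 300))  # e.g., 300 image-derived features per knockout

# Standardize each feature, then cluster genes by correlation distance:
z = (fingerprints - fingerprints.mean(axis=0)) / fingerprints.std(axis=0)
tree = linkage(pdist(z, metric="correlation"), method="average")
labels = fcluster(tree, t=2, criterion="maxclust")

for gene, label in zip(genes, labels):
    print(gene, "-> cluster", label)  # genes sharing a cluster may act in the same pathway
```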

There are a multitude of notable discoveries to emerge from the researchers’ screening data, including a surprising one related to ion channels. Two genes, AQP7 and ATP1A1, were identified for their roles in mitosis, specifically the proper segregation of chromosomes. These genes encode membrane-bound proteins that transport ions into and out of the cell. “In all the years I’ve been working on mitosis, I never imagined ion channels were involved,” says Cheeseman.

He adds, “We’re really just scratching the surface of what can be unearthed from our data. We hope many others will not only benefit from — but also build upon — this resource.”

This work was supported by grants from the U.S. National Institutes of Health as well as support from the Gordon and Betty Moore Foundation, a National Defense Science and Engineering Graduate Fellowship, and a Natural Sciences and Engineering Research Council Fellowship.

Reprinted with permission from MIT News (http://news.mit.edu/).

Does Cheaper Government Debt Crowd Out Liquidity?

Will Global Rate Hikes Set Off a Global Debt Bomb?

Raising interest rates is a necessary but insufficient measure to combat inflation. To reduce inflation to 2 percent, central banks must significantly reduce their balance sheets, which has not yet occurred in local currency, and governments must reduce spending, which is highly unlikely.

The most challenging obstacle is also the accumulation of debt.

The so-called expansionary policies have not been an instrument for reducing debt, but rather for increasing it. According to the Institute of International Finance (IIF), global debt approached 350 percent of GDP in the second quarter of 2022, and the IIF anticipates that the ratio will reach 352 percent by the end of the year.

Global issuances of high-yield debt have slowed but remain elevated. According to the IMF, the total issuance of European and American high-yield bonds reached a record high of $1.6 trillion in 2021, as businesses and investors capitalized on still low interest rates and high liquidity. The IMF also expects high-yield bond issuances in the United States and Europe to reach $700 billion in 2022, similar to 2008 levels. All of the risky debt accumulated over the past few years will need to be refinanced between 2023 and 2025, requiring the refinancing of over $10 trillion of the riskiest debt at much higher interest rates and with less liquidity.

Moody’s estimates that United States corporate debt maturities will total $785 billion in 2023 and $800 billion in 2024. These come on top of the federal government’s own maturities. The United States has $31 trillion in outstanding debt with a five-year average maturity, implying roughly $5 trillion in refinancing needs during fiscal 2023 in addition to a $2 trillion budget deficit. Refinancing on that scale increases the risk of crowding out and liquidity stress in the debt market.
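
As a back-of-envelope check on those figures, the sketch below spreads the outstanding debt evenly across its average maturity; real maturity schedules are front-loaded and lumpy, so this only indicates the order of magnitude.

```python
# Back-of-envelope gross issuance estimate using the figures in this article.
# A uniform five-year maturity ladder is an assumption; real schedules are lumpy.
outstanding_debt = 31.0  # trillion USD
avg_maturity = 5.0       # years
deficit = 2.0            # trillion USD, fiscal 2023 budget deficit

annual_refinancing = outstanding_debt / avg_maturity  # ~6.2T per year if evenly laddered
gross_issuance = annual_refinancing + deficit

print(f"naive refinancing estimate: ~${annual_refinancing:.1f}T per year")
print(f"gross issuance with deficit: ~${gross_issuance:.1f}T")
# The article cites roughly $5T of refinancing in fiscal 2023; the naive uniform
# estimate overshoots because actual maturities are not evenly distributed.
```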

According to The Economist, the cumulative interest bill for the United States between 2023 and 2027 should be less than 3 percent of GDP, which appears manageable. However, as a result of the current path of rate hikes, this number has increased, which exacerbates an already unsustainable fiscal problem.

If you think the problem in the United States is significant, the situation in the eurozone is even worse. Governments in the euro area are accustomed to negative nominal and real interest rates. The majority of the major European economies have issued negative-yielding debt over the past three years and must now refinance at significantly higher rates. France and Italy have longer average debt maturities than the United States, but their debt and growing structural deficits are also greater. Morgan Stanley estimates that, over the next two years, the major economies of the eurozone will require a total of $3 trillion in refinancing.

Although at higher rates, governments will refinance their debt. What will become of businesses and families? If quantitative tightening is added to the liquidity gap, a credit crunch is likely to ensue. However, the issue is not rate hikes themselves but complacency about the excessive accumulation of debt.

Explaining to citizens that negative real interest rates are an anomaly that should never have been implemented is challenging. Families may be concerned about the possibility of a higher mortgage payment, but they are oblivious to the fact that house prices have skyrocketed due to risk accumulation caused by excessively low interest rates.

The magnitude of the monetary insanity since 2008 is enormous, but the glut of 2020 was unprecedented. Between 2009 and 2018, we were repeatedly informed that there was no inflation, despite the massive asset inflation and the unjustified rise in financial sector valuations. This is inflation, massive inflation. It was not only an overvaluation of financial assets, but also a price increase for irreplaceable goods and services. The FAO food index reached record highs in 2018, as did the housing, health, education, and insurance indices. Those who argued that printing money without control did not cause inflation, however, continued to believe that nothing was wrong until 2020, when they broke every rule.

In 2020–21, the annual increase in the US money supply (M2) was 27 percent, more than 2.5 times higher than the quantitative easing peak of 2009 and the highest level since 1960. Negative yielding bonds, an economic anomaly that should have set off alarm bells as an example of a bubble worse than the “subprime” bubble, amounted to over $12 trillion. But statism was pleased because government bonds experienced a bubble. Statism always warns of bubbles in everything except that which causes the government’s size to expand.

In the eurozone, the increase in the money supply was the greatest in its history, nearly three times the Draghi-era peak. Today, the annualized rate is greater than 6 percent, remaining above Draghi’s “bazooka.” All of this unprecedented monetary excess during an economic shutdown was used to stimulate public spending, which continued after the economy reopened … And inflation skyrocketed. However, according to Lagarde, inflation appeared “out of nowhere.”

No, inflation is not caused by commodities, war, or “disruptions in the supply chain.” Wars are deflationary if the money supply remains constant. Several times between 2008 and 2018, the value of commodities rose sharply, but they do not cause all prices to rise simultaneously. If the amount of currency issued remains unchanged, supply chain issues do not affect all prices. If the money supply remains the same, core inflation does not rise to levels not seen in thirty years.

All of the excess of unproductive debt issued during a period of complacency will exacerbate the problem in 2023 and 2024. Even if refinancing occurs smoothly but at higher costs, the impact on new credit and innovation will be enormous, and the crowding out effect of government debt absorbing the majority of liquidity and the zombification of the already indebted will result in weaker growth and decreased productivity in the future.

About the Author:

Daniel Lacalle, PhD, economist and fund manager, is the author of the bestselling books Freedom or Equality (2020), Escape from the Central Bank Trap (2017), The Energy World Is Flat (2015), and Life in the Financial Markets (2014).

FTX, What Happened and Should Non-Crypto Investors Care

Image Credit: Phillip Pessar (Flickr)

Dramatic Collapse of the Cryptocurrency Exchange FTX Contains Lessons for Investors but Won’t Affect Most People

In the fast-paced world of cryptocurrency, vast sums of money can be made or lost in the blink of an eye. In early November 2022, the second-largest cryptocurrency exchange, FTX, was valued at more than US$30 billion. By Nov. 14, FTX was in bankruptcy proceedings along with more than 100 companies connected to it. D. Brian Blank and Brandy Hadley are professors who study finance, investing and fintech. They explain how and why this incredible collapse happened, what effect it might have on the traditional financial sector and whether you need to care if you don’t own any cryptocurrency.

What Happened?

In 2019, Sam Bankman-Fried founded FTX, a company that ran one of the largest cryptocurrency exchanges.

FTX is where many crypto investors trade and hold their cryptocurrency, similar to the New York Stock Exchange for stocks. Bankman-Fried is also the founder of Alameda Research, a hedge fund that trades and invests in cryptocurrencies and crypto companies.

Sam Bankman-Fried founded both FTX and the investment firm Alameda Research. News sources have reported some less-than-responsible financial dealings between the two companies. Image via The Conversation.

Within the traditional financial sector, these two companies would be separate firms entirely or at least have divisions and firewalls in place between them. But in early November 2022, news outlets reported that a significant proportion of Alameda’s assets were a type of cryptocurrency released by FTX itself.

A few days later, news broke that FTX had allegedly been loaning customer assets to Alameda for risky trades without the consent of the customers and also issuing its own FTX cryptocurrency for Alameda to use as collateral. As a result, criminal and regulatory investigators began scrutinizing FTX for potentially violating securities law.

These two pieces of news basically led to a bank run on FTX.

Large crypto investors, like FTX’s competitor Binance, as well as individuals, began to sell off cryptocurrency held on FTX’s exchange. FTX quickly lost its ability to meet customer withdrawals and halted trading. On Nov. 14, FTX was also hit by an apparent insider hack and lost $600 million worth of cryptocurrency.

That same day, FTX, Alameda Research and 130 other affiliated companies founded by Bankman-Fried filed for bankruptcy. This action may leave more than a million suppliers, employees and investors who bought cryptocurrencies through the exchange or invested in these companies with no way to get their money back.

Among the groups and individuals who held currency on the FTX platform were many of the normal players in the crypto world, but a number of more traditional investment firms also held assets within FTX. The venture capital firm Sequoia Capital and the Ontario Teachers’ Pension Plan are each estimated to have held millions of dollars of ownership stakes in FTX within their investment portfolios. Both have already written off these investments as lost.

Image Credit: OTPP

Did a Lack of Oversight Play a Role?

In traditional markets, corporations generally limit the risk they expose themselves to by maintaining liquidity and solvency. Liquidity is the ability of a firm to sell assets quickly without those assets losing much value. Solvency is the idea that a company’s assets are worth more than what that company owes to debtors and customers.

But the crypto world has generally operated with much less caution than the traditional financial sector, and FTX is no exception. About two-thirds of the money that FTX owed to the people who held cryptocurrency on its exchange – roughly $11.3 billion of $16 billion owed – was backed by illiquid coins created by FTX. FTX was taking its customers’ money, giving it to Alameda to make risky investments and then creating its own currency, known as FTT, as a replacement – cryptocurrency that it was unable to sell at a high enough price when it needed to.

In addition, nearly 40% of Alameda’s assets were in FTX’s own cryptocurrency – and remember, both companies were founded by the same person.

This all came to a head when investors decided to sell their coins on the exchange. FTX did not have enough liquid assets to meet those demands. This, in turn, drove the value of FTT from over $26 a coin at the beginning of November to under $2 by Nov. 13. By this point, FTX owed more money to its customers than it was worth.
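
A stylized mark-to-market calculation, using only the round figures reported above and deliberately ignoring everything else on FTX’s real balance sheet, shows how a collapsing FTT price flips the books from apparently balanced to deeply insolvent:

```python
# Stylized solvency check using the article's figures; not FTX's actual accounts.
liabilities = 16.0        # billion USD owed to customers
ftt_backing_value = 11.3  # billion USD of that backing held as FTT
other_assets = liabilities - ftt_backing_value  # ~4.7B non-FTT backing (simplification)

ftt_tokens = ftt_backing_value / 26.0  # implied token count (billions) at ~$26/coin

for price in (26.0, 10.0, 2.0):
    assets = other_assets + ftt_tokens * price
    status = "solvent" if assets >= liabilities else "insolvent"
    print(f"FTT at ${price:>5.2f}: assets ~${assets:4.1f}B vs ${liabilities}B owed -> {status}")
```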

In regulated exchanges, investing with customer funds is illegal. Additionally, auditors validate financial statements, and firms must publish the amount of money they hold in reserve that is available to fund customer withdrawals. And even if things go wrong, the Securities Investor Protection Corporation – or SIPC – protects depositors against the loss of investments from an exchange failure or financially troubled brokerage firm. None of these guardrails are in place within the crypto world.

Why is this a Big Deal in Crypto?

As a result of this meltdown, the company Binance is now considering creating an industry recovery fund – akin to a private version of SIPC insurance – to avoid future failures of crypto exchanges.

But while the collapse of FTX and Alameda – valued at more than $30 billion and now essentially worth nothing – is dramatic, the bigger implication is simply the potential lost trust in crypto. Bank runs are rare in traditional financial institutions, but they are increasingly common in the crypto space. Given that Bankman-Fried and FTX were seen as some of the biggest, most trusted figures in crypto, these events may lead more investors to think twice about putting money in crypto.

If I Don’t Own Crypto, Should I Care?

Though investment in cryptocurrencies has grown rapidly, the entire crypto market – valued at over $3 trillion at its peak – is much smaller than the $120 trillion traditional stock market.

While investors and regulators are still evaluating the consequences of this fall, the impact on any person who doesn’t personally own crypto will be minuscule. It is true that many larger investment funds, like BlackRock and the Ontario Teachers’ Pension Plan, held investments in FTX, but the estimated $95 million the pension lost through the collapse of FTX is just 0.05% of the entire fund’s investments.

The takeaway for most individuals is not to invest in unregulated markets without understanding the risks. In high-risk environments like crypto, it’s possible to lose everything – a lesson investors in FTX are learning the hard way.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of D. Brian Blank, Assistant Professor of Finance, Mississippi State University and Brandy Hadley, Associate Professor of Finance and the David A. Thompson Professor in Applied Investments, Appalachian State University

Is Rising Unemployment Good for the Economy?

Image Credit: Anna Shvets (Pexels)

Rising Unemployment: Economists Sometimes Say it’s Good for the Economy, But Are They Right?

Interest rates are up, which means that projections for growth are down. Put simply, the proverbial something is close to hitting the fan.

Business closures and job losses are likely to become another hurdle for the global economy – and that points to rising unemployment. Yet, while most people would think of rising unemployment as a bad thing, some economists don’t entirely agree.

Economists have long pointed to a counterintuitive positive relationship between unemployment and entrepreneurship, born of the fact that people who lose their job often start businesses. This is often referred to within economic literature as necessity-based or push-factor entrepreneurship.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Daragh O’Leary, PhD Researcher in Economics, University College Cork

Where it Gets Tricky

There is certainly good evidence for the existence of this counterintuitive relationship. The graph below shows the rates of UK business creation in blue and unemployment in red. As you can see, unemployment started to increase during the global financial crisis of 2007-09 and business creation followed not long after.

UK new business creation and unemployment, 2006-2020
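
For readers who want to see what “followed not long after” means statistically, here is a minimal sketch comparing same-year and one-year-lagged correlations. The series are synthetic stand-ins, not the actual UK data plotted above:

```python
# Sketch: testing whether business creation follows unemployment with a lag.
# Synthetic stand-in series; not the real UK data behind the chart.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2006, 2021)
# Unemployment spikes around the 2007-09 financial crisis, then decays:
unemployment = 5 + 3 * np.exp(-((years - 2010) ** 2) / 8) + rng.normal(0, 0.2, years.size)

# Business creation responds with a roughly one-year delay (push-factor story):
creation = np.full(years.size, 300.0)
creation[1:] += 25 * (unemployment[:-1] - 5) + rng.normal(0, 5, years.size - 1)

same_year = np.corrcoef(unemployment, creation)[0, 1]
one_year_lag = np.corrcoef(unemployment[:-1], creation[1:])[0, 1]
print(f"same-year correlation:    {same_year:.2f}")
print(f"one-year-lag correlation: {one_year_lag:.2f}")  # stronger if job losses push people to start firms
```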

This relationship between business creation and unemployment has previously been used by some as a justification for cold social policies towards the unemployed on the rationale that “the market fixes itself” in the long run. They see business closures and job losses not as human miseries that require government help, but as necessary evils needed to reallocate the money, people and other resources back into the economy in more efficient ways.

But my latest research has found that rising unemployment is not quite the silver bullet for reigniting the economic engine that it’s cracked up to be. I looked at 148 regions across Europe from 2008 to 2017. Although I did find evidence that unemployment can stimulate business creation over time, this only seems to happen in higher performing regions within higher performing economies such as the Netherlands, Finland and Austria.

In lower performing regions within lower performing economies such as Bulgaria, Romania and Hungary, the relationship between unemployment and business creation actually appears to be negative. In other words, rather than inducing business creation, unemployment simply seems to lead to more unemployment.
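
This kind of sign flip is what an interaction term captures in a regression. The sketch below uses fabricated data and invented variable names to show the shape of such a model; it is not the actual specification or dataset from the research described here:

```python
# Sketch: unemployment's effect on firm births flipping sign by region type.
# Fabricated data and invented names; not the study's model or dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 148 * 10                             # 148 regions x 10 years, echoing the study's scope
high_perf = rng.integers(0, 2, n)        # 1 = high-performing region (assumed flag)
unemp = rng.normal(8, 2, n)

# Construct the outcome so the unemployment effect flips sign with region type:
births = 100 + (-1.5 + 3.0 * high_perf) * unemp + rng.normal(0, 5, n)
df = pd.DataFrame({"births": births, "unemp": unemp, "high_perf": high_perf})

model = smf.ols("births ~ unemp * high_perf", data=df).fit()
print(model.params)  # unemp: negative; unemp:high_perf interaction: positive
```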

The reason why higher performing regions in wealthier areas have a positive relationship between job losses and business creation is that they enjoy what are known as “urbanisation economies”. These are positive benefits derived from the scale and density of economic activity occurring within that area, including wider arrays of services, greater pools of customers and greater numbers of transactions relative to other areas of the economy.

For example, a firm located in a capital city like London will benefit from more abundant access to consumers, suppliers and lenders as well as larger labour pools. The higher population density in these areas also makes it more likely that firms and workers will learn faster as they observe the activities of their many neighbours. In more peripheral areas with fewer of these characteristics, the opposite is true. This is why unemployment affects different places differently.

What it Means

One consequence is that economists need to stop explaining how economies perform differently based solely on national factors. And it’s not just unemployment where this becomes apparent. For example, Ireland’s longstanding low rate of corporation tax (12.5%) has been cited as a reason for its high foreign direct investment, which accounts for roughly 20% of private sector employment.

Yet while just over 43% of all Irish enterprises in 2020 were located in either Dublin or Cork, counties like Leitrim in the north accounted for fewer than 1% of enterprises. So while national measures can help induce entrepreneurship and increase the overall size of the pie, the pie is shared very unequally. Just as rising unemployment can benefit some areas while hindering others, the same is true of government interventions.

Rural areas like County Leitrim have benefited far less from Ireland’s low corporation tax than more urbanised regions further south. Julia Gavin/Alamy

We therefore need to stop viewing the free market and government intervention as either wrong or right. In some contexts one is going to be more helpful, while in other contexts it will be the opposite. Recognizing this reality would improve on much of the polarized debate in politics and economics, in which those on the right can come across as cold and ignorant, while those on the left can seem self-righteous and sanctimonious, viewing capitalism and markets as dirty words.

How does this apply to today’s gathering downturn? It would make sense for governments to prioritize supporting businesses in more peripheral regions, while leaving those in wealthier urban areas to fend for themselves.

The famous economist John Kenneth Galbraith gave what I believe to be one of the best pieces of commentary on this topic, saying:

Where the market works, I’m for that. Where government is necessary, I’m for that … I’m in favor of whatever works in the particular case.

If we are to survive this upcoming recession and get things going again, we are going to need to acknowledge that centralized “one-size-fits-all” policies won’t be useful everywhere. The solutions to economic recovery are in some cases government intervention and in others the free market, but not always one or the other.

Telomeres and New Findings on Cancer Mortality

Image Credit: Steve Jurvetson (Flickr)

How Cancer Cells can Become Immortal – New Research Finds a Mutated Gene that Helps Melanoma Defeat the Normal Limits on Repeated Replication

A defining characteristic of cancer cells is their immortality. Usually, normal cells are limited in the number of times they can divide before they stop growing. Cancer cells, however, can overcome this limitation to form tumors and bypass “mortality” by continuing to replicate.

Telomeres play an essential role in determining how many times a cell can divide. These repetitive sequences of DNA are located at the ends of chromosomes, structures that contain genetic information. In normal cells, continued rounds of replication shorten telomeres until they become so short that they eventually trigger the cell to stop replicating. In contrast, tumor cells can maintain the lengths of their telomeres by activating an enzyme called telomerase that rebuilds telomeres during each replication.

Telomeres are protective caps at the ends of chromosomes

Telomerase is encoded by a gene called TERT, one of the most frequently mutated genes in cancer. TERT mutations cause cells to make a little too much telomerase and are thought to help cancer cells keep their telomeres long even though they replicate at high rates. Melanoma, an aggressive form of skin cancer, is highly dependent on telomerase to grow, and three-quarters of all melanomas acquire mutations in telomerase. These same TERT mutations also occur across other cancer types.

Unexpectedly, researchers found that TERT mutations could only partially explain the longevity of telomeres in melanoma. While TERT mutations did indeed extend the life span of cells, they did not make them immortal. That meant there must be something else that helps telomerase allow cells to grow uncontrollably. But what that “second hit” might be has been unclear.
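
A toy simulation helps make that distinction concrete. All the numbers below are illustrative rather than measured values: with no telomerase boost the cell arrests after a fixed number of divisions, a partial boost (a caricature of a TERT mutation alone) extends its replicative life span, and only full compensation removes the limit.

```python
# Toy model: telomeres shorten with each division until a critical length
# triggers arrest; telomerase adds length back. All numbers are illustrative.
def divisions_until_arrest(telomere_bp=10_000, loss_per_division=100,
                           telomerase_gain=0, critical_bp=4_000, max_divisions=1_000):
    divisions = 0
    while telomere_bp > critical_bp and divisions < max_divisions:
        telomere_bp += telomerase_gain - loss_per_division
        divisions += 1
    return divisions

print(divisions_until_arrest())                     # 60: normal cell stops dividing
print(divisions_until_arrest(telomerase_gain=50))   # 120: extra telomerase extends life span
print(divisions_until_arrest(telomerase_gain=100))  # 1000: loss fully offset, cap reached ("immortal")
```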

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Pattra Chun-On, Ph.D. Candidate in Environmental and Occupational Health, University of Pittsburgh Health Sciences, and Jonathan Alder, Assistant Professor of Medicine, University of Pittsburgh Health Sciences.

We are researchers who study the role telomeres play in human health and diseases like cancer in the Alder Lab at the University of Pittsburgh. While investigating the ways that tumors maintain their telomeres, we and our colleagues found another piece to the puzzle: another telomere-associated gene in melanoma.

Cell Immortality Gets a Boost

Our team focused on melanoma because this type of cancer is linked to people with long telomeres. We examined DNA sequencing data from hundreds of melanomas, looking for mutations in genes related to telomere length.

We identified a cluster of mutations in a gene called TPP1. This gene codes for one of the six proteins that form a molecular complex called shelterin that coats and protects telomeres. Even more interesting is the fact that TPP1 is known to activate telomerase. Identifying the TPP1 gene’s connection to cancer telomeres was, in a way, obvious. After all, it was more than a decade ago that researchers showed that TPP1 would increase telomerase activity.

We tested whether having an excess of TPP1 could make cells immortal. When we introduced just TPP1 proteins into cells, there was no change in cell mortality or telomere length. But when we introduced TERT and TPP1 proteins at the same time, we found that they worked synergistically to cause significant telomere lengthening.

To confirm our hypothesis, we then inserted TPP1 mutations into melanoma cells using CRISPR-Cas9 genome editing. We saw an increase in the amount of TPP1 protein the cells made, and a subsequent increase in telomerase activity. Finally, we returned to the DNA sequencing data and found that 5% of all melanomas have a mutation in both TERT and TPP1. While this is still a significant proportion of melanomas, there are likely other factors that contribute to telomere maintenance in this cancer.

Our findings imply that TPP1 is likely one of the missing puzzle pieces that boost telomerase’s capacity to maintain telomeres and support tumor growth and immortality.

Making Cancer Mortal

Knowing that cancer cells use these genes in their replication and growth means that researchers could also block them, potentially stopping telomeres from lengthening and making cancer cells mortal. This discovery not only gives scientists another potential avenue for cancer treatment but also draws attention to an underappreciated class of mutations outside the traditional boundaries of genes that can play a role in cancer diagnostics.