Will Scientific Research and Technological Innovation Be Stifled By Expiring Agreement?

Image: President Jimmy Carter and Chinese Vice Premier Deng Xiaoping meet outside of the Oval Office on Jan. 30, 1979

The US and China May Be Ending an Agreement on Science and Technology Cooperation − A Policy Expert Explains What This Means for Research

A decades-old science and technology cooperative agreement between the United States and China expires this week. On the surface, an expiring diplomatic agreement may not seem significant. But unless it’s renewed, the quiet end to a cooperative era may have consequences for scientific research and technological innovation.

The possible lapse comes after U.S. Rep. Mike Gallagher, R-Wis., led a congressional group warning the U.S. State Department in July 2023 to beware of cooperation with China. This group recommended to let the agreement expire without renewal, claiming China has gained a military advantage through its scientific and technological ties with the U.S.

The State Department has dragged its feet on renewing the agreement, only requesting an extension at the last moment to “amend and strengthen” the agreement.

The U.S. is an active international research collaborator, and since 2011 China has been its top scientific partner, displacing the United Kingdom, which had been the U.S.‘s most frequent collaborator for decades. China’s domestic research and development spending is closing in on parity with that of the United States. Its scholastic output is growing in both number and quality. According to recent studies, China’s science is becoming increasingly creative, breaking new ground.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of, Caroline Wagner, Professor of Public Affairs, The Ohio State University.

As a policy analyst and public affairs professor, I research international collaboration in science and technology and its implications for public policy. Relations between countries are often enhanced by negotiating and signing agreements, and this agreement is no different. The U.S.’s science and technology agreement with China successfully built joint research projects and shared research centers between the two nations.

U.S. scientists can typically work with foreign counterparts without a political agreement. Most aren’t even aware of diplomatic agreements, which are signed long after researchers have worked together. But this is not the case with China, where the 1979 agreement became a prerequisite for and the initiator of cooperation.

In 1987 former President Jimmy Carter visited Yangshuo, his wife Rosalyn and he insisted that went around Yangshuo countryside by bicycle.

A 40-Year Diplomatic Investment

The U.S.-China science and technology agreement was part of a historic opening of relations between the two countries, following decades of antagonism and estrangement. U.S. President Richard Nixon set in motion the process of normalizing relations with China in the early 1970s. President Jimmy Carter continued to seek an improved relationship with China.

China had announced reforms, modernizations and a global opening after an intense period of isolation from the time of the Cultural Revolution from the late 1950s until the early 1970s. Among its “four modernizations” was science and technology, in addition to agriculture, defense and industry.

While China is historically known for inventing gunpowder, paper and the compass, China was not a scientific power in the 1970s. American and Chinese diplomats viewed science as a low-conflict activity, comparable to cultural exchange. They figured starting with a nonthreatening scientific agreement could pave the way for later discussions on more politically sensitive issues.

On July 28, 1979, Carter and Chinese Premier Deng Xiaoping signed an “umbrella agreement” that contained a general statement of intent to cooperate in science and technology, with specifics to be worked out later.

In the years that followed, China’s economy flourished, as did its scientific output. As China’s economy expanded, so did its investment in domestic research and development. This all boosted China’s ability to collaborate in science – aiding their own economy.

Early collaboration under the 1979 umbrella agreement was mostly symbolic and based upon information exchange, but substantive collaborations grew over time.

A major early achievement came when the two countries published research showing mothers could ingest folic acid to prevent birth defects like spina bifida in developing embryos. Other successful partnerships developed renewable energy, rapid diagnostic tests for the SARS virus and a solar-driven method for producing hydrogen fuel.

Joint projects then began to emerge independent of government agreements or aid. Researchers linked up around common interests – this is how nation-to-nation scientific collaboration thrives.

Many of these projects were initiated by Chinese Americans or Chinese nationals working in the United States who cooperated with researchers back home. In the earliest days of the COVID-19 pandemic, these strong ties led to rapid, increased Chinese-U.S. cooperation in response to the crisis.

Time of Conflict

Throughout the 2000s and 2010s, scientific collaboration between the two countries increased dramatically – joint research projects expanded, visiting students in science and engineering skyrocketed in number and collaborative publications received more recognition.

As China’s economy and technological success grew, however, U.S. government agencies and Congress began to scrutinize the agreement and its output. Chinese know-how began to build military strength and, with China’s military and political influence growing, they worried about intellectual property theft, trade secret violations and national security vulnerabilities coming from connections with the U.S.

Recent U.S. legislation, such as the CHIPS and Science Act, is a direct response to China’s stunning expansion. Through the CHIPS and Science Act, the U.S. will boost its semiconductor industry, seen as the platform for building future industries, while seeking to limit China’s access to advances in AI and electronics.

A Victim of Success?

Some politicians believe this bilateral science and technology agreement, negotiated in the 1970s as the least contentious form of cooperation – and one renewed many times – may now threaten the United States’ dominance in science and technology. As political and military tensions grow, both countries are wary of renewal of the agreement, even as China has signed similar agreements with over 100 nations.

The United States is stuck in a world that no longer exists – one where it dominates science and technology. China now leads the world in research publications recognized as high quality work, and it produces many more engineers than the U.S. By all measures, China’s research spending is soaring.

Even if the recent extension results in a renegotiated agreement, the U.S. has signaled to China a reluctance to cooperate. Since 2018, joint publications have dropped in number. Chinese researchers are less willing to come to the U.S. Meanwhile, Chinese researchers who are in the U.S. are increasingly likely to return home taking valuable knowledge with them.

The U.S. risks being cut off from top know-how as China forges ahead. Perhaps looking at science as a globally shared resource could help both parties craft a truly “win-win” agreement.

Do Regional Federal Reserve Branches Put Banks in Their Region at Risk?

The Fed Is Losing Tens of Billions: How Are Individual Federal Reserve Banks Doing?

The Federal Reserve System as of the end of July 2023 has accumulated operating losses of $83 billion and, with proper, generally accepted accounting principles applied, its consolidated retained earnings are negative $76 billion, and its total capital negative $40 billion. But the System is made up of 12 individual Federal Reserve Banks (FRBs). Each is a separate corporation with its own shareholders, board of directors, management and financial statements. The commercial banks that are the shareholders of the Fed actually own shares in the particular FRB of which they are a member, and receive dividends from that FRB. As the System in total puts up shockingly bad numbers, the financial situations of the individual FRBs are seldom, if ever, mentioned. In this article we explore how the individual FRBs are doing.

All 12 FRBs have net accumulated operating losses, but the individual FRB losses range from huge in New York and really big in Richmond and Chicago to almost breakeven in Atlanta. Seven FRBs have accumulated losses of more than $1 billion. The accumulated losses of each FRB as of July 26, 2023 are shown in Table 1.

Table 1: Accumulated Operating Losses of Individual Federal Reserve Banks

New York ($55.5 billion)

Richmond ($11.2 billion )

Chicago ( $6.6 billion )

San Francisco ( $2.6 billion )

Cleveland ( $2.5 billion )

Boston ( $1.6 billion )

Dallas ( $1.4 billion )

Philadelphia ($688 million)

Kansas City ($295 million )

Minneapolis ($151 million )

St. Louis ($109 million )

Atlanta ($ 13 million )

The FRBs are of very different sizes. The FRB of New York, for example, has total assets of about half of the entire Federal Reserve System. In other words, it is as big as the other 11 FRBs put together, by far first among equals. The smallest FRB, Minneapolis, has assets of less than 2% of New York. To adjust for the differences in size, Table 2 shows the accumulated losses as a percent of the total capital of each FRB, answering the question, “What percent of its capital has each FRB lost through July 2023?” There is wide variation among the FRBs. It can be seen that New York is also first, the booby prize, in this measure, while Chicago is a notable second, both having already lost more than three times their capital. Two additional FRBs have lost more than 100% of their capital, four others more than half their capital so far, and two nearly half. Two remain relatively untouched.

Table 2: Accumulated Losses as a Percent of Total Capital of Individual FRBs

New York 373%

Chicago 327%

Dallas 159%

Richmond 133%

Boston 87%

Kansas City 64%

Cleveland 56%

Minneapolis 56%

San Francisco 48%

Philadelphia 46%

St. Louis 11%

Atlanta 1%

Thanks to statutory formulas written by a Congress unable to imagine that the Federal Reserve could ever lose money, let alone lose massive amounts of money, the FRBs maintained only small amounts of retained earnings, only about 16% of their total capital. From the percentages in Table 2 compared to 16%, it may be readily observed that the losses have consumed far more than the retained earnings in all but two FRBs. The GAAP accounting principle to be applied is that operating losses are a subtraction from retained earnings. Unbelievably, the Federal Reserve claims that its losses are instead an intangible asset. But keeping books of the Federal Reserve properly, 10 of the FRBs now have negative retained earnings, so nothing left to pay out in dividends.

On orthodox principles, then, 10 of the 12 FRBs would not be paying dividends to their shareholders. But they continue to do so. Should they?

Much more striking than negative retained earnings is negative total capital. As stated above, properly accounted for, the Federal Reserve in the aggregate has negative capital of $40 billion as of July 2023. This capital deficit is growing at the rate of about $ 2 billion a week, or over $100 billion a year. The Fed urgently wants you to believe that its negative capital does not matter. Whether it does or what negative capital means to the credibility of a central bank can be debated, but the big negative number is there. It is unevenly divided among the individual FRBs, however.

With proper accounting, as is also apparent from Table 2, four of the FRBs already have negative total capital. Their negative capital in dollars shown in Table 3.

Table 3: Federal Reserve Banks with Negative Capital as of July 2023

New York ($40.7 billion)

Chicago ($ 4.6 billion )

Richmond ($ 2.8 billion )

Dallas ($514 million )

In these cases, we may even more pointedly ask: With negative capital, why are these banks paying dividends?

In six other FRBs, their already shrunken capital keeps on being depleted by continuing losses. At the current rate, they will have negative capital within a year, and in 2024 will face the same fundamental question.

What explains the notable differences among the various FRBs in the extent of their losses and the damage to their capital? The answer is the large difference in the advantage the various FRBs enjoy by issuing paper currency or dollar bills, formally called “Federal Reserve Notes.” Every dollar bill is issued by and is a liability of a particular FRB, and the FRBs differ widely in the proportion of their balance sheets funded by paper currency.

The zero-interest cost funding provided by Federal Reserve Notes reduces the need for interest-bearing funding. All FRBs are invested in billions of long-term fixed-rate bonds and mortgage securities yielding approximately 2%, while they all pay over 5% for their deposits and borrowed funds—a surefire formula for losing money. But they pay 5% on smaller amounts if they have more zero-cost paper money funding their bank. In general, more paper currency financing reduces an FRB’s operating loss, and a smaller proportion of Federal Reserve Notes in its balance sheet increases its loss. The wide range of Federal Reserve Notes as a percent of various FRBs’ total liabilities, a key factor in Atlanta’s small accumulated losses and New York’s huge ones, is shown in Table 4.

Table 4: Federal Reserve Notes Outstanding as a Percent of Total Liabilities

Atlanta 64%

St. Louis 60%

Minneapolis 58%

Dallas 51%

Kansas City 50%

Boston 45%

Philadelphia 44%

San Francisco 39%

Cleveland 38%

Chicago 26%

Richmond 23%

New York 17%

The Federal Reserve System was originally conceived not as a unitary central bank, but as 12 regional reserve banks. It has evolved a long way toward being a unitary organization since then, but there are still 12 different banks, with different balance sheets, different shareholders, different losses, and different depletion or exhaustion of their capital. Should it make a difference to a member bank shareholder which particular FRB it owns stock in? The authors of the Federal Reserve Act thought so.

About the Author

Alex J. Pollock is a Senior Fellow at the Mises Institute, and is the co-author of Surprised Again! — The Covid Crisis and the New Market Bubble (2022). Previously he served as the Principal Deputy Director of the Office of Financial Research in the U.S. Treasury Department (2019-2021), Distinguished Senior Fellow at the R Street Institute (2015-2019 and 2021), Resident Fellow at the American Enterprise Institute (2004-2015), and President and CEO, Federal Home Loan Bank of Chicago (1991-2004). He is the author of Finance and Philosophy—Why We’re Always Surprised (2018).

As BRICS Cooperation Accelerates, Is It Time for the US to Develop a BRICS Policy?

Image: External Affairs Ministers at BRICS foreign ministers meeting, MEA Photogallery (Flickr)

An Expansion of BRICS Countries Would Increase its Negotiating Strength

When leaders of the BRICS group of large emerging economies – Brazil, Russia, India, China and South Africa – meet in Johannesburg for two days beginning on Aug. 22, 2023, foreign policymakers in Washington will no doubt be listening carefully.

The BRICS group has been challenging some key tenets of U.S. global leadership in recent years. On the diplomatic front, it has undermined the White House’s strategy on Ukraine by countering the Western use of sanctions on Russia. Economically, it has sought to chip away at U.S. dominance by weakening the dollar’s role as the world’s default currency.

And now the group is looking at expanding, with 23 formal candidates. Such a move – especially if BRICS accepts Iran, Cuba or Venezuela – would likely strengthen the group’s anti-U.S. positioning.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of, Mihaela Papa, Senior Fellow, The Fletcher School, Tufts University, Frank O’Donnell, Adjunct Lecturer in the International Studies Program, Boston College, Zhen Han, Assistant Professor of Global Studies, Sacred Heart University.

So what can Washington expect next, and how can it respond?

Our research team at Tufts University has been working on a multiyear Rising Power Alliances project that has analyzed the evolution of BRICS and the group’s relationship with the U.S. What we have found is that the common portrayal of BRICS as a China-dominated group primarily pursuing anti-U.S. agendas is misplaced.

Rather, the BRICS countries connect around common development interests and a quest for a multipolar world order in which no single power dominates. Yet BRICS consolidation has turned the group into a potent negotiation force that now challenges Washington’s geopolitical and economic goals. Ignoring BRICS as a major policy force – something the U.S. has been prone to do in the past – is no longer an option.

Reining in the America bashing

At the dawn of BRIC cooperation in 2008 – before South Africa joined in 2010, adding an “S” – members were mindful that the group’s existence could lead to tensions with policymakers who viewed the U.S. as the world’s “indispensable nation.”

As Brazil’s former Foreign Minister Celso Amorim observed at the time, “We should promote a more democratic world order by ensuring the fullest participation of developing countries in decision-making bodies.” He saw BRIC countries “as a bridge between industrialized and developing countries for sustainable development and a more balanced international economic policy.”

While such realignments would certainly dilute U.S. power, BRIC explicitly refrained from anti-U.S. rhetoric.

After the 2009 BRIC summit, the Chinese foreign ministry clarified that BRIC cooperation should not be “directed against a third party.” Indian Foreign Secretary Shivshankar Menon had already confirmed that there would be no America bashing at BRIC and directly rejected China’s and Russia’s efforts to weaken the dollar’s dominance.

Rather, the new entity complemented existing efforts toward multipolarity – including China-Russia cooperation and the India, Brazil, South Africa trilateral dialogue. Not only was BRIC envisioned as a forum for ideas rather than ideologies, but it also planned to stay open and transparent.

BRICS alignment and tensions with the US

Today, BRICS is a formidable group – it accounts for 41% of the world’s population, 31.5% of global gross domestic product and 16% of global trade. As such, it has a lot of bargaining power if the countries act together – which they increasingly do. During the Ukraine war, Moscow’s BRICS partners have ensured Russia’s economic and diplomatic survival in the face of Western attempts to isolate Moscow. Brazil, India, China and South Africa engaged with Russia in 166 BRICS events in 2022. And some members became crucial export markets for Russia.

The group’s political development – through which it has continually added new areas of cooperation and extra “bodies” – is impressive, considering the vast differences among its members.

We designed a BRICS convergence index to measure how BRICS states converged around 47 specific policies between 2009 and 2021, ranging from economics and security to sustainable development. We found deepening convergence and cooperation across these issues and particularly around industrial development and finance.

But BRICS convergence does not necessarily lead to greater tension with the United States. Our data finds limited divergence between the joint policies of BRICS and that of the U.S. on a wide range of issues. Our research also counters the argument that BRICS is China-driven. Indeed, China has been unable to advance some key policy proposals. For example, since the 2011 BRICS summit, China has sought to establish a BRICS free trade agreement but could not get support from other states. And despite various trade coordination mechanisms in BRICS, the overall trade among BRICS remains low – only 6% of the countries’ combined trade.

However, tensions between the United States and BRICS exist, especially when BRICS turns “bloc-like” and when U.S. global interests are at stake. The turning point for this was 2015, when BRICS achieved major institutional growth under Russia’s presidency. This coincided with Moscow enhancing its pivot to China and BRICS following Western sanctions over Russia’s annexation of Crimea in 2014. Russia was eager to develop alternatives to Western-led institutional and market mechanisms it could no longer benefit from.

That said, important champions of BRICS convergence are also close strategic partners to the U.S. For example, India has played a major role in strengthening the security dimension of BRICS cooperation, championing a counter-terrorism agenda that has drawn U.S. opposition due to its vague definition of terrorist actors.

Further constraints on U.S. power may emerge from BRICS transitioning to using local currencies over the dollar and encouraging BRICS candidate countries to do the same. Meanwhile, China and Russia’s efforts to engage BRICS on outer space governance is another trend for policymakers in Washington to watch.

Toward a US BRICS Policy?

So where does a more robust – and potentially larger – BRICS leave the U.S.?

To date, U.S. policy has largely ignored BRICS as an entity. The U.S. foreign and defense policymaking apparatus is regionally oriented. In the past 20 years, it has pivoted from the Middle East to Asia and most recently to the Indo-Pacific region.

When it comes to the BRICS nations, Washington has focused on developing bilateral relations with Brazil, India and South Africa, while managing tensions with China and isolating Russia. The challenge for the Biden administration is understanding how, as a group, BRICS’ operations and institutions affect U.S. global interests.

Meanwhile, BRICS expansion raises new questions. When asked about U.S. partners such as Algeria and Egypt wanting to join BRICS, the Biden administration explained that it does not ask partners to choose between the United States and other countries.

But the international demand for joining BRICS calls for a deeper reflection on how Washington pursues foreign policy.

Designing a BRICS-focused foreign policy is an opportunity for the United States to innovate around addressing development needs. Rather than dividing countries into friendly democracies and others, a BRICS-focused policy can see the Biden administration lead on universal development issues and build development-focused, close relationships that encourage a better alignment between countries of the Global South and the United States.

It could also allow the Biden administration to deepen cooperation with India, Brazil, South Africa and some of the new BRICS candidates. Areas of focus could include issues where the BRICS countries have struggled to coordinate their policy, such as AI development and governance, energy security and global restrictions on chemical and biological weapons.

Developing a BRICS policy could help re-imagine U.S. foreign policy and ensure that the United States is well positioned in a multipolar world.

AI models Are Powerful, But are They Biologically Plausible?

Machine Learning Offers Insights Into Any Role of Astrocytes in the Human Brain

Adam Zewe | MIT News

Artificial neural networks, ubiquitous machine-learning models that can be trained to complete many tasks, are so called because their architecture is inspired by the way biological neurons process information in the human brain.

About six years ago, scientists discovered a new type of more powerful neural network model known as a transformer. These models can achieve unprecedented performance, such as by generating text from prompts with near-human-like accuracy. A transformer underlies AI systems such as ChatGPT and Bard, for example. While incredibly effective, transformers are also mysterious: Unlike with other brain-inspired neural network models, it hasn’t been clear how to build them using biological components.

Now, researchers from MIT, the MIT-IBM Watson AI Lab, and Harvard Medical School have produced a hypothesis that may explain how a transformer could be built using biological elements in the brain. They suggest that a biological network composed of neurons and other brain cells called astrocytes could perform the same core computation as a transformer.

Recent research has shown that astrocytes, non-neuronal cells that are abundant in the brain, communicate with neurons and play a role in some physiological processes, like regulating blood flow. But scientists still lack a clear understanding of what these cells do computationally.

With the new study, published this week in open-access format in the Proceedings of the National Academy of Sciences, the researchers explored the role astrocytes play in the brain from a computational perspective, and crafted a mathematical model that shows how they could be used, along with neurons, to build a biologically plausible transformer.

Their hypothesis provides insights that could spark future neuroscience research into how the human brain works. At the same time, it could help machine-learning researchers explain why transformers are so successful across a diverse set of complex tasks.

“The brain is far superior to even the best artificial neural networks that we have developed, but we don’t really know exactly how the brain works. There is scientific value in thinking about connections between biological hardware and large-scale artificial intelligence networks. This is neuroscience for AI and AI for neuroscience,” says Dmitry Krotov, a research staff member at the MIT-IBM Watson AI Lab and senior author of the research paper.

Joining Krotov on the paper are lead author Leo Kozachkov, a postdoc in the MIT Department of Brain and Cognitive Sciences; and Ksenia V. Kastanenka, an assistant professor of neurobiology at Harvard Medical School and an assistant investigator at the Massachusetts General Research Institute. 

A Biological Impossibility Becomes Plausible

Transformers operate differently than other neural network models. For instance, a recurrent neural network trained for natural language processing would compare each word in a sentence to an internal state determined by the previous words. A transformer, on the other hand, compares all the words in the sentence at once to generate a prediction, a process called self-attention.

For self-attention to work, the transformer must keep all the words ready in some form of memory, Krotov explains, but this didn’t seem biologically possible due to the way neurons communicate.

However, a few years ago scientists studying a slightly different type of machine-learning model (known as a Dense Associated Memory) realized that this self-attention mechanism could occur in the brain, but only if there were communication between at least three neurons.

“The number three really popped out to me because it is known in neuroscience that these cells called astrocytes, which are not neurons, form three-way connections with neurons, what are called tripartite synapses,” Kozachkov says.

When two neurons communicate, a presynaptic neuron sends chemicals called neurotransmitters across the synapse that connects it to a postsynaptic neuron. Sometimes, an astrocyte is also connected — it wraps a long, thin tentacle around the synapse, creating a tripartite (three-part) synapse. One astrocyte may form millions of tripartite synapses.

The astrocyte collects some neurotransmitters that flow through the synaptic junction. At some point, the astrocyte can signal back to the neurons. Because astrocytes operate on a much longer time scale than neurons — they create signals by slowly elevating their calcium response and then decreasing it — these cells can hold and integrate information communicated to them from neurons. In this way, astrocytes can form a type of memory buffer, Krotov says.

“If you think about it from that perspective, then astrocytes are extremely natural for precisely the computation we need to perform the attention operation inside transformers,” he adds.

Building a Neuron-Astrocyte Network

With this insight, the researchers formed their hypothesis that astrocytes could play a role in how transformers compute. Then they set out to build a mathematical model of a neuron-astrocyte network that would operate like a transformer.

They took the core mathematics that comprise a transformer and developed simple biophysical models of what astrocytes and neurons do when they communicate in the brain, based on a deep dive into the literature and guidance from neuroscientist collaborators.

Then they combined the models in certain ways until they arrived at an equation of a neuron-astrocyte network that describes a transformer’s self-attention.

“Sometimes, we found that certain things we wanted to be true couldn’t be plausibly implemented. So, we had to think of workarounds. There are some things in the paper that are very careful approximations of the transformer architecture to be able to match it in a biologically plausible way,” Kozachkov says.

Through their analysis, the researchers showed that their biophysical neuron-astrocyte network theoretically matches a transformer. In addition, they conducted numerical simulations by feeding images and paragraphs of text to transformer models and comparing the responses to those of their simulated neuron-astrocyte network. Both responded to the prompts in similar ways, confirming their theoretical model.

“Having remained electrically silent for over a century of brain recordings, astrocytes are one of the most abundant, yet less explored, cells in the brain. The potential of unleashing the computational power of the other half of our brain is enormous,” says Konstantinos Michmizos, associate professor of computer science at Rutgers University, who was not involved with this work. “This study opens up a fascinating iterative loop, from understanding how intelligent behavior may truly emerge in the brain, to translating disruptive hypotheses into new tools that exhibit human-like intelligence.”

The next step for the researchers is to make the leap from theory to practice. They hope to compare the model’s predictions to those that have been observed in biological experiments, and use this knowledge to refine, or possibly disprove, their hypothesis.

In addition, one implication of their study is that astrocytes may be involved in long-term memory, since the network needs to store information to be able act on it in the future. Additional research could investigate this idea further, Krotov says.

“For a lot of reasons, astrocytes are extremely important for cognition and behavior, and they operate in fundamentally different ways from neurons. My biggest hope for this paper is that it catalyzes a bunch of research in computational neuroscience toward glial cells, and in particular, astrocytes,” adds Kozachkov.

Reprinted with permission from MIT News ( http://news.mit.edu/ )

Lab-Grown ‘Ghost Hearts’ Work to Solve Organ Transplant Shortage

A ‘ghost heart’ is a pig’s heart prepared so that it can be transplanted into people. Provided by Doris Taylor

Combining a Cleaned-Out Pig Heart with a Patient’s Own Stem Cells

Heart disease is the leading cause of death worldwide. The World Health Organization estimates that 17.9 million people lose their lives to it each year, accounting for 32% of global deaths.

Doris Taylor is a scientist working in regenerative medicine and tissue engineering. Her work has focused on creating personalized functioning human hearts in a lab that could rule out the need for donors. Taylor has dubbed these hearts “ghost hearts.” This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts like Doris Taylor, Regenerative Medicine Lecturer at the University of New Hampshire.

What are the biggest challenges facing organ donations today?

Currently, patients in need of a heart transplant need to join a waitlist, and hearts become available when someone else has died. Because there are not enough hearts to go around, only the very sick are put on the waitlist. The U.S. transplants about 11 hearts a day, and on a given day there are more than 3,000 people waiting for a heart.

Even when organs are successfully transplanted, it isn’t a Hollywood fairy-tale ending. A person receiving an organ transplant essentially trades one disease for other medical complications and diseases. Toxic drugs necessary to prevent rejection can cause high blood pressure diabetes, cancer and kidney failure. These are serious medical issues that also affect people emotionally, financially and physically.

About 18% of people die in the first year after a transplant.

What is the so-called “ghost heart”? How does it work?

The ghost heart is a heart whose cells have been removed. All that remains is the heart framework, or scaffolding. It’s called a ghost heart because removing the cells causes the heart to turn from red to white. A human heart wouldn’t work as a scaffold because so few are available to work with.

So my team and I went with the next best thing: a pig heart. Pig hearts are similar to human hearts in terms of their size and structure. Both have four chambers – two atria and two ventricles – responsible for pumping blood. And structures from pig hearts such as valves have been used in humans safely.

To remove the cells, the pig heart is gently washed through its blood vessels with a mild detergent to remove the cells. This process is called perfusion decellurization. The cell-free heart can then be seeded with new cells – in this case, a patient’s cells – thus forming a personalized heart.

Doris Taylor speaks at the 2023 Imagine Solutions Conference.

What role do stem cells play in creating a heart?

If you lined up the cells needed for an average-size 350-gram human heart, they would stretch for 41,000 miles. Stacked on top of one another, they would amount to 2 billion lines of cells, or enough to fill seven movie screens. But heart cells don’t divide. If they did, hearts could likely repair themselves.

Stem cells, on the other hand, do divide. They can also form into specialized cells – in this case, heart cells. Nobel Prize laureate Dr. Shinya Yamanaka discovered a method to make stem cells out of blood or skin cells from an adult. My team and I employed this method to obtain stem cells, then grew those cells into billions. After that, the team used chemicals to “differentiate” them into heart cells. We employed this method to obtain billions and billions of heart cells.

The first time I saw heart cells beating in a dish it was life-changing. But while the cells are alive and beat, they are not a heart. To be a heart, these cells need to be placed into a form that lets them become a unified organ, to mature and to be able to pump blood. In a human body, this happens during development; we had to reproduce that capacity in the lab.

In 2022, a pig heart that had been genetically engineered to reduce rejection and improve acceptance was transplanted into a human. Why is it better to build a heart from scratch using pig scaffolding instead?

Let me be clear: Any heart is better than no heart. And xenotransplantation – the process by which nonhuman animal organs are transplanted into humans – opened doors for all scientists in this field.

The patient received a pig heart that had been gene-edited. Human genes were added, and some pig genes were removed, but the heart still essentially comprised pig cells within a pig scaffold. As a result, the individual had to take anti-rejection drugs that suppressed the immune system. And, unbeknownst to doctors, the heart was carrying a pig virus that ultimately killed the patient two months following the transplant.

I believe these sorts of problems are avoided with the ghost heart. My team removes the pig cellular material from the scaffold, leaving only the protein structure and blood vessel channels behind. The proteins are so similar to human scaffold proteins they don’t appear to cause rejection.

AI Model Can Help Determine Where a Patient’s Cancer Arose

Prediction Model Could Enable Targeted Treatments for Difficult Tumors

Anne Trafton | MIT News

For a small percentage of cancer patients, doctors are unable to determine where their cancer originated. This makes it much more difficult to choose a treatment for those patients, because many cancer drugs are typically developed for specific cancer types.

A new approach developed by researchers at MIT and Dana-Farber Cancer Institute may make it easier to identify the sites of origin for those enigmatic cancers. Using machine learning, the researchers created a computational model that can analyze the sequence of about 400 genes and use that information to predict where a given tumor originated in the body.

Using this model, the researchers showed that they could accurately classify at least 40 percent of tumors of unknown origin with high confidence, in a dataset of about 900 patients. This approach enabled a 2.2-fold increase in the number of patients who could have been eligible for a genomically guided, targeted treatment, based on where their cancer originated.

“That was the most important finding in our paper, that this model could be potentially used to aid treatment decisions, guiding doctors toward personalized treatments for patients with cancers of unknown primary origin,” says Intae Moon, an MIT graduate student in electrical engineering and computer science who is the lead author of the new study.

Mysterious Origins

In 3 to 5 percent of cancer patients, particularly in cases where tumors have metastasized throughout the body, oncologists don’t have an easy way to determine where the cancer originated. These tumors are classified as cancers of unknown primary (CUP).

This lack of knowledge often prevents doctors from being able to give patients “precision” drugs, which are typically approved for specific cancer types where they are known to work. These targeted treatments tend to be more effective and have fewer side effects than treatments that are used for a broad spectrum of cancers, which are commonly prescribed to CUP patients.

“A sizeable number of individuals develop these cancers of unknown primary every year, and because most therapies are approved in a site-specific way, where you have to know the primary site to deploy them, they have very limited treatment options,” Gusev says.

Moon, an affiliate of the Computer Science and Artificial Intelligence Laboratory who is co-advised by Gusev, decided to analyze genetic data that is routinely collected at Dana-Farber to see if it could be used to predict cancer type. The data consist of genetic sequences for about 400 genes that are often mutated in cancer. The researchers trained a machine-learning model on data from nearly 30,000 patients who had been diagnosed with one of 22 known cancer types. That set of data included patients from Memorial Sloan Kettering Cancer Center and Vanderbilt-Ingram Cancer Center, as well as Dana-Farber.

The researchers then tested the resulting model on about 7,000 tumors that it hadn’t seen before, but whose site of origin was known. The model, which the researchers named OncoNPC, was able to predict their origins with about 80 percent accuracy. For tumors with high-confidence predictions, which constituted about 65 percent of the total, its accuracy rose to roughly 95 percent.

After those encouraging results, the researchers used the model to analyze a set of about 900 tumors from patients with CUP, which were all from Dana-Farber. They found that for 40 percent of these tumors, the model was able to make high-confidence predictions.

The researchers then compared the model’s predictions with an analysis of the germline, or inherited, mutations in a subset of tumors with available data, which can reveal whether the patients have a genetic predisposition to develop a particular type of cancer. The researchers found that the model’s predictions were much more likely to match the type of cancer most strongly predicted by the germline mutations than any other type of cancer.

Guiding Drug Decisions

To further validate the model’s predictions, the researchers compared data on the CUP patients’ survival time with the typical prognosis for the type of cancer that the model predicted. They found that CUP patients who were predicted to have cancer with a poor prognosis, such as pancreatic cancer, showed correspondingly shorter survival times. Meanwhile, CUP patients who were predicted to have cancers that typically have better prognoses, such as neuroendocrine tumors, had longer survival times.

Another indication that the model’s predictions could be useful came from looking at the types of treatments that CUP patients analyzed in the study had received. About 10 percent of these patients had received a targeted treatment, based on their oncologists’ best guess about where their cancer had originated. Among those patients, those who received a treatment consistent with the type of cancer that the model predicted for them fared better than patients who received a treatment typically given for a different type of cancer than what the model predicted for them.

Using this model, the researchers also identified an additional 15 percent of patients (2.2-fold increase) who could have received an existing targeted treatment, if their cancer type had been known. Instead, those patients ended up receiving more general chemotherapy drugs.

“That potentially makes these findings more clinically actionable because we’re not requiring a new drug to be approved. What we’re saying is that this population can now be eligible for precision treatments that already exist,” Gusev says.

The researchers now hope to expand their model to include other types of data, such as pathology images and radiology images, to provide a more comprehensive prediction using multiple data modalities. This would also provide the model with a comprehensive perspective of tumors, enabling it to predict not just the type of tumor and patient outcome, but potentially even the optimal treatment.

Alexander Gusev, an associate professor of medicine at Harvard Medical School and Dana-Farber Cancer Institute, is the senior author of the paper, which appeared on August 7, 2023, in Nature Medicine.

Reprinted with permission from MIT News ( http://news.mit.edu/ )

Why T Cells Fail to Eliminate Cancer Cells

A cytotoxic T cell (blue) attacks a cancer cell (green) by releasing toxic chemicals (red). Alex Ritter and Jennifer Lippincott Schwartz and Gillian Griffiths/National Institutes of Health via Flickr

 Immune Cells that Fight Cancer Become Exhausted Within Hours of First Encountering Tumors

A key function of our immune system is to detect and eliminate foreign pathogens such as bacteria and viruses. Immune cells like T cells do this by distinguishing between different types of proteins within cells, which allows them to detect the presence of infection or disease.

A type of T cell called cytotoxic T cells can recognize the mutated proteins on cancer cells and should therefore be able to kill them. However, in most patients, cancer cells grow unchecked despite the presence of T cells.

The current explanation scientists have as to why T cells fail to eliminate cancer cells is because they become “exhausted.” The idea is that T cells initially function well when they first face off against cancer cells, but gradually lose their ability to kill the cancer cells after repeated encounters.

Cancer immunotherapies such as immune checkpoint inhibitors and CAR-T cell therapy have shown remarkable promise by inducing long-lasting remission in some patients with otherwise incurable cancers. However, these therapies often fail to induce long-term responses in most patients, and T cell exhaustion is a major culprit.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Mary Philip, Assistant Professor of Medicine and Pathology, Vanderbilt University and Michael Rudloff, MD-Ph.D. Candidate in Molecular Pathology and Immunology, Vanderbilt University.

We are researchers who study ways to harness the immune system to treat cancer. Scientists like us have been working to determine the mechanisms controlling how well T cells function against tumors. In our newly published research, we found that T cells become exhausted within hours after encountering cancer cells.

Timing T Cell Exhaustion

By the time most patients are diagnosed with cancer, their immune system has been interacting with developing cancer cells for months to years. We wanted to go back earlier in time to figure out what happens when T cells first encounter tumor cells.

To do this, we used mice genetically engineered to develop liver cancers as they age, similarly to how liver cancers develop in people. We introduced trackable cytotoxic T cells that specifically recognize liver cancer cells to analyze the T cells’ function and monitor which of the genes are activated or turned off over time.

We also used these same trackable T cells to study their response in mice infected with the bacteria Listeria. In these mice, we found that the T cells were highly functional and eliminated infected cells. By comparing the differences between dysfunctional T cells from tumors and highly functional T cells from infected mice, we can home in on the genes that code for critical proteins that T cells use to regulate their function.

In our previous work, we found that T cells become dysfunctional with dramatically altered genetic structure within five days of encountering cancer cells in mice. We had originally decided to focus on the very earliest time points after T cells encounter cancer cells in mice with liver cancer or metastatic melanoma because we thought there would be fewer genetic changes. That would have allowed us to identify the earliest and most critical regulators of T cell dysfunction.

Instead, we found multiple surprising hallmarks of T cell dysfunction within six to 12 hours after they encountered cancer cells, including thousands of changes in genetic structure and gene expression.

T cells play an important role in fighting against disease. National Institute of Allergy and Infectious Diseases

We analyzed the different regulatory genes and pathways in T cells encountering cancer cells compared to those of T cells encountering infected cells. We found that genes associated with inflammation were highly activated in T cells interacting with infected cells but not in T cells interacting with cancer cells.

Next, we looked at how the initial early changes to the genetic structure of T cells evolved over time. We found that very early DNA changes were stabilized and reinforced with continued exposure to cancer cells, effectively “imprinting” dysfunctional gene expression patterns in the T cells. This meant that when the T cells were removed from the tumors after five days and transferred to tumor-free mice, they still remained dysfunctional.

Boosting T Cell Killing

Altogether, our research suggests that T cells in tumors are not necessarily working hard and getting exhausted. Rather, they are blocked right from the start. This is because the negative signals cancer cells send out to their surrounding environment induce T cell dysfunction, and a lack of positive signals like inflammation results in a failure to kick T cells into high gear.

Our team is now exploring strategies to stimulate inflammatory pathways in T cells encountering cancer cells to make them function as though they are encountering an infection. Our hope is that this will help T cells kill their cancer targets more effectively.

What Will It Take for Cryptocurrencies to Become Full-Fledged Money?

Can a Currency Without a Country Survive?

The crypto-unit bitcoin holds out the prospect of something revolutionary: money created in the free market, money the production and use of which the state has no access to. The transactions carried out with it are anonymous; outsiders do not know who paid and who received the payment. It would be money that cannot be multiplied at will, whose quantity is finite, that knows no national borders, and that can be used unhindered worldwide. This is possible because the bitcoin is based on a special form of electronic data processing and storage: blockchain technology (a “distributed ledger technology,” DLT), which can also be described as a decentralized account book.

Think through the consequences if such a “denationalized” form of money should actually prevail in practice. The state can no longer tax its citizens as before. It lacks information on the labor and capital incomes of citizens and enterprises and their total wealth. The only option left to the state is to tax the assets in the “real world”—such as houses, land, works of art, etc. But this is costly and expensive. It could try to levy a “poll tax”: a tax in which everyone pays the same absolute tax amount—regardless of the personal circumstances of the taxpayers, such as income, wealth, ability, to achieve and so on. But would that be practicable? Could it be enforced? This is doubtful.

The state could also no longer simply borrow money. In a cryptocurrency world, who would give credit to the state? The state would have to justify the expectation that it would use the borrowed money productively to service its debt. But as we know, the state is not in a position to do this or is in a much worse position than private companies. So even if the state could obtain credit, it would have to pay a comparatively high interest rate, severely restricting its scope for credit financing.

In view of the financial disempowerment of the state by a cryptocurrency, the question arises: Could the state as we know it today still exist at all, could it still mobilize enough supporters and gather them behind it? After all, the fantasies of redistribution and enrichment that today drive many people as voters into the arms of political parties and ideologies would disappear into thin air. The state would no longer function as a redistribution machine; it basically would have little or no money to finance political promises. Cryptocurrencies therefore have the potential to herald the end of the state as we know it today.

The transition from the national fiat currencies to a cryptocurrency created in the free market has, above all, consequences for the existing fiat monetary system and the production and employment structure it has created. Suppose a cryptocurrency (C) rises in the favor of money demanders. It is increasingly in demand and therefore appreciates against the established fiat currency (F). If the prices of goods, calculated in F, remain unchanged, the holder of C records an increase in his purchasing power: one obtains more F for C and can purchase more goods, provided that the prices of goods, calculated in F, remain unchanged.

Since C has now appreciated compared to F, the prices of the goods expressed in F must also rise sooner or later—otherwise the holder of C could arbitrate by exchanging C for F and then paying the prices of the goods labeled in F. And because more and more people want to use C as money, goods prices will soon be labeled not only in F, but also in C. When money users increasingly turn away from F because they see C as the better money, the purchasing power devaluation of F continues. Because F is an unbacked currency, in extreme cases it can lose its purchasing power and become a total loss.

The decline in the purchasing power of F will have far-reaching consequences for the production and employment structure of the economy. It leads to an increase in market interest rates for loans denominated in F. Investments that have so far seemed profitable turn out to be a flop. Companies cut jobs. Debtors whose loans become due have problems obtaining follow-up loans and become insolvent. The boom provided by the fiat currencies collapses and turns into a bust. If the central banks accompany this bust with an expansion of the money supply, the exchange rate of the fiat currencies against the cryptocurrency will fall even further. The purchasing power of the sight, time, and savings deposits and bonds denominated in fiat currencies would be lost; in the event of loan defaults, creditors could only hope to be (partially) compensated by the collateral values, if any.

However, the bitcoin has not yet developed to the point where it could be a perfect substitute for the fiat currencies. For example, the performance of the bitcoin network is not yet large enough. At present, it is operating at full capacity when it processes around 360,000 payments per day. In Germany alone, however, around 75 million transfers are made in one working day! Another problem with bitcoin transactions is finality. In modern fiat cash payment systems, there is a clearly identifiable point in time at which a payment is legally and de facto completed, and from that point on the money transferred can be used immediately. However, DLT consensus techniques (such as proof of work) only allow relative finality, and this is undoubtedly detrimental to the money user (because blocks added to the blockchain can subsequently become invalid by resolving forks).

The transaction costs are also of great importance regarding whether the bitcoin can assert itself as a universally used means of payment. In the recent past, there have been some major fluctuations in this area: In mid-June 2019, a transaction cost about $4.10, in December 2017 it peaked at more than $37, but in the meantime for many months it had been only $0.07. In addition, the time taken to process a transaction had also fluctuated considerably at times, which may be disadvantageous from the point of view of bitcoin users in view of the emergence of instant payment for fiat cash payments.

Another important aspect is the question of the “intermediary.” Bitcoin is designed to enable intermediary-free transactions between participants. But do the market participants really want intermediary–free money? What if there are problems? For example, if someone made a mistake and transferred one hundred bitcoins instead of one, he cannot reverse the transaction. And nobody can help him! The fact that many hold their bitcoins in trading venues and not in their private digital wallets suggests that even in a world of cryptocurrencies there is a demand for intermediaries offering services such as storage and security of private keys.

However, as soon as intermediaries come into play, the transaction chain is no longer limited to the digital world, but reaches the real world. At the interface between the digital and the real world, a trustworthy entity is required. Just think of credit transactions. They cannot be performed unseen (trustless) and anonymously. Payment defaults can happen here, and therefore the lender wants to know who the borrower is, what credit quality he has, what collateral he provides. And if the bridge is built from the digital to the real world, the crypto-money inevitably finds itself in the crosshairs of the state. However, this bridge will ultimately be necessary, because in modern economies with a division of labor, money must have the capacity for intermediation.

It is safe to assume that technology will continue to make progress, that it will remove many remaining obstacles. However, it can also be expected that the state will make every effort to discourage a free market for money, for example, by reducing the competitiveness of alternative money media such as precious metals and crypto-units vis-à-vis fiat money through tax measures (such as turnover and capital gains taxes). As long as this is the case, it will be difficult even for money that is better in all other respects to assert itself.

Therefore, technical superiority alone will probably not be sufficient to help free market money—whether in the form of gold, silver, or crypto-units—achieve a breakthrough. In addition, and above all, it will be necessary for people to demand their right to self-determination in the choice of money or to recognize the need to make use of it. Ludwig von Mises has cited the “sound-money principle” in this context: “[T]he sound-money principle has two aspects. It is affirmative in approving the market’s choice of a commonly used medium of exchange. It is negative in obstructing the government’s propensity to meddle with the currency system.” And he continues: “It is impossible to grasp the meaning of the idea of sound money if one does not realize that it was devised as an instrument for the protection of civil liberties against despotic inroads on the part of governments. Ideologically it belongs in the same class with political constitutions and bills of rights.”

These words make it clear that in order for a free market for money to become at all possible, quite a substantial change must take place in people’s minds. We must turn away from democratic socialism, from all socialist-collectivist false doctrines, from their state-glorifying delusion, no longer listen to socialist appeals to envy and resentment. This can only be achieved through better insight, acceptance of better ideas and logical thinking. Admittedly, this is a difficult undertaking, but it is not hopeless. Especially since there is a logical alternative to democratic socialism: the private law society with a free market for money. What this means is outlined in the final chapter of this book.

About the Author:

Dr. Thorsten Polleit is Chief Economist of Degussa and Honorary Professor at the University of Bayreuth. He also acts as an investment advisor.

[This article is adapted from Chapter 21 of The Global Currency Plot.]

ASO Success Treating Fragile X Syndrome

Image Credit: Thom Leach, The Conversation

Fragile X Syndrome Often Results from Improperly Processed Genetic Material – Correctly Cutting RNA Offers a Potential Treatment

Fragile X syndrome is a genetic disorder caused by a mutation in a gene that lies at the tip of the X chromosome. It is linked to autism spectrum disorders. People with fragile X experience a range of symptoms that include cognitive impairment, developmental and speech delays and hyperactivity. They may also have some physical features such as large ears and foreheads, flabby muscles and poor coordination.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of, Joel Richter, Professor of Neuroscience, UMass Chan Medical School and Sneha Shah, Assistant Professor of Molecular Medicine. UMass Chan Medical School

Along with our colleagues Jonathan Watts and Elizabeth Berry-Kravis, we are a team of scientists with expertise in molecular biology, nucleic acid chemistry and pediatric neurology. We recently discovered that the mutated gene responsible for fragile X syndrome is active in most people with the disorder, not silenced as previously thought. But the affected gene on the X chromosome is still unable to produce the protein it codes for because the genetic material isn’t properly processed. Correcting this processing error suggests that a potential treatment for symptoms of fragile X may one day be available.

Repairing Faulty RNA Splicing

The FMR1 gene encodes a protein that regulates protein synthesis. A lack of this protein leads to overall excessive protein synthesis in the brain that results in many of the symptoms of fragile X.

The mutation that causes fragile X results in extra copies of a DNA sequence called a CGG repeat. Everyone has CGG repeats in their FMR1 gene, but typically fewer than 55 copies. Having 200 or more CGG repeats silences the FMR1 gene and results in fragile X syndrome. However, we found that around 70% of people with fragile X still have an active FMR1 gene their cellular machinery can read. But it is mutated enough that it is unable to direct the cell to produce the protein it encodes.

Genes are transcribed into another form of genetic material called RNA that cells use to make proteins. Normally, genes are processed before transcription in order to make a readable strand of RNA. This involves removing the noncoding sequences that interrupt genes and splicing the genetic material back together. For people with fragile X, the cellular machinery that does this cutting incorrectly splices the genetic material, such that the protein the FMR1 gene codes for is not produced.

Fragile X syndrome is the most common inherited form of intellectual disability.

Using cell cultures in the lab, we found that correcting this missplice can restore proper RNA function and produce the FMR1 gene’s protein. We did this by using short bits of DNA called antisense oligonucleotides, or ASOs. When these bits of genetic material bind to RNA molecules, they change the way the cell can read it. That can have effects on which proteins the cell can successfully produce.

ASOs have been used with spectacular success to treat other childhood disorders, such as spinal muscular atrophy, and are now being used to treat a variety of neurological diseases.

Beyond Mice Models

Notably, fragile X syndrome is most often studied using mouse models. However, because these mice have been genetically engineered to lack a functional FMR1 gene, they are quite different from people with fragile X. In people, it is not a missing gene that causes fragile X but mutations that lead the existing gene to lose function.

Because the mouse model of fragile X lacks the FMR1 gene, the RNA is not made and so cannot be misspliced. Our discovery would not have been possible if we used mice.

With further research, future studies in people may one day include injecting ASOs into the cerebrospinal fluid of fragile X patients, where it will travel to the brain and hopefully restore proper function of the FMR1 gene and improve their cognitive function.

ChatGPT Shortcomings Include Hallucinations, Bias, and Privacy Breaches

Full Disclosure of Limitations May Be the Quick Fix to AI Limitations

The Federal Trade Commission has launched an investigation of ChatGPT maker OpenAI for potential violations of consumer protection laws. The FTC sent the company a 20-page demand for information in the week of July 10, 2023. The move comes as European regulators have begun to take action, and Congress is working on legislation to regulate the artificial intelligence industry.

The FTC has asked OpenAI to provide details of all complaints the company has received from users regarding “false, misleading, disparaging, or harmful” statements put out by OpenAI, and whether OpenAI engaged in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm. The agency has asked detailed questions about how OpenAI obtains its data, how it trains its models, the processes it uses for human feedback, risk assessment and mitigation, and its mechanisms for privacy protection.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of, Anjana Susarla, Professor of Information Systems, Michigan State University.

As a researcher of social media and AI, I recognize the immensely transformative potential of generative AI models, but I believe that these systems pose risks. In particular, in the context of consumer protection, these models can produce errors, exhibit biases and violate personal data privacy.

Hidden Power

At the heart of chatbots such as ChatGPT and image generation tools such as DALL-E lies the power of generative AI models that can create realistic content from text, images, audio and video inputs. These tools can be accessed through a browser or a smartphone app.

Since these AI models have no predefined use, they can be fine-tuned for a wide range of applications in a variety of domains ranging from finance to biology. The models, trained on vast quantities of data, can be adapted for different tasks with little to no coding and sometimes as easily as by describing a task in simple language.

Given that AI models such as GPT-3 and GPT-4 were developed by private organizations using proprietary data sets, the public doesn’t know the nature of the data used to train them. The opacity of training data and the complexity of the model architecture – GPT-3 was trained on over 175 billion variables or “parameters” – make it difficult for anyone to audit these models. Consequently, it’s difficult to prove that the way they are built or trained causes harm.

Hallucinations

In language model AIs, a hallucination is a confident response that is inaccurate and seemingly not justified by a model’s training data. Even some generative AI models that were designed to be less prone to hallucinations have amplified them.

There is a danger that generative AI models can produce incorrect or misleading information that can end up being damaging to users. A study investigating ChatGPT’s ability to generate factually correct scientific writing in the medical field found that ChatGPT ended up either generating citations to nonexistent papers or reporting nonexistent results. My collaborators and I found similar patterns in our investigations.

Such hallucinations can cause real damage when the models are used without adequate supervision. For example, ChatGPT falsely claimed that a professor it named had been accused of sexual harassment. And a radio host has filed a defamation lawsuit against OpenAI regarding ChatGPT falsely claiming that there was a legal complaint against him for embezzlement.

Bias and Discrimination

Without adequate safeguards or protections, generative AI models trained on vast quantities of data collected from the internet can end up replicating existing societal biases. For example, organizations that use generative AI models to design recruiting campaigns could end up unintentionally discriminating against some groups of people.

When a journalist asked DALL-E 2 to generate images of “a technology journalist writing an article about a new AI system that can create remarkable and strange images,” it generated only pictures of men. An AI portrait app exhibited several sociocultural biases, for example by lightening the skin color of an actress.

Data Privacy

Another major concern, especially pertinent to the FTC investigation, is the risk of privacy breaches where the AI may end up revealing sensitive or confidential information. A hacker could gain access to sensitive information about people whose data was used to train an AI model.

Researchers have cautioned about risks from manipulations called prompt injection attacks, which can trick generative AI into giving out information that it shouldn’t. “Indirect prompt injection” attacks could trick AI models with steps such as sending someone a calendar invitation with instructions for their digital assistant to export the recipient’s data and send it to the hacker.

Some Solutions

The European Commission has published ethical guidelines for trustworthy AI that include an assessment checklist for six different aspects of AI systems: human agency and oversight; technical robustness and safety; privacy and data governance; transparency, diversity, nondiscrimination and fairness; societal and environmental well-being; and accountability.

Better documentation of AI developers’ processes can help in highlighting potential harms. For example, researchers of algorithmic fairness have proposed model cards, which are similar to nutritional labels for food. Data statements and datasheets, which characterize data sets used to train AI models, would serve a similar role.

Amazon Web Services, for instance, introduced AI service cards that describe the uses and limitations of some models it provides. The cards describe the models’ capabilities, training data and intended uses.

The FTC’s inquiry hints that this type of disclosure may be a direction that U.S. regulators take. Also, if the FTC finds OpenAI has violated consumer protection laws, it could fine the company or put it under a consent decree.

Shrinkflation and Skimpflation Are Eating Our Lunch

Image Credit: Brett Jordan (Flickr)

Why Economic Data Doesn’t Reconcile With Personal Experience

Does grocery shopping and eating out cost the same as it did in 2019? Government statistics on personal consumption and expenditures would seem to indicate they do. Most of us know that we are paying noticeably more to eat than we did a few years ago. Below is an article explaining the flaws in government data and the nuances that hide actual experience from this set of numbers. It is written by Dr. Jonathan Newman, he is a Fellow at the Mises Institute, his research focuses on inflation and business cycles, and the history of economic thought.  – Paul Hoffman, Managing Editor, Channelchek.

Economist Jeremy Horpedahl dismissed the silly claim by anticapitalists that capitalism must engineer food scarcity for the sake of profits. He presented a graph of Bureau of Labor Statistics (BLS) data demonstrating a substantial decrease in household food expenditure as a percentage of income—from 44 percent in 1901 to a mere 9 percent in 2021. This is something to celebrate and certainly can be attributed to the abundance of market economies.

But when Jordan Peterson asked, “And what’s happened the last two years?” I went digging. First, I confirmed Horpedahl’s observation: the amount we spend on food as a proportion of our budget has fallen dramatically. Second, I saw what Peterson hinted at: a significant spike in food spending when covid and the associated mess of government interventions hit (figure 1).

Figure 1: Food and personal consumption expenditures, 1959–2023

Source: US Bureau of Economic Analysis, FRED.

Interestingly, the spike looks like a blip. Someone oblivious to the events of the past few years might see this chart and say, “Yeah, something strange happened in 2020, but it looks like everything is back to normal.” I’m certain that this doesn’t align with anyone’s experience, however. Even today, no one would say that restaurant visits and grocery store trips cost the same as they did in 2019.

What changed in 2020? Why does this graph not feel right? Assuming the Bureau of Economic Analysis data isn’t totally off (and it is important to be skeptical of government data), why would a January 2023 report on consumer inflation sentiment conclude that “there is a disconnect between the inflation data reported by the government and what consumers say they now pay for necessities”?

The difference lies in the qualitative aspects of our experience as consumers. Spending proportions may have returned to their trend, but that isn’t the whole story. “Shrinkflation” and “skimpflation” have taken their toll on the quantity and quality of the food we enjoy—or maybe the food we tolerate is more apt.

Businesses know that charging higher prices is unpopular, especially when many consumers are convinced that greed is driving price inflation. So businesses resort to reducing the amount of food in the package, diluting the product but keeping the same amount, or otherwise cutting corners in ways that consumers may not immediately notice.

Thankfully, websites such as mouseprint.org document some of these cases:

Sara Lee blueberry bagels reduced from 1 lb., 4.0 oz. per bag to 1 lb., 0.7 oz.

Bounty “double rolls” reduced from 98 sheets to 90 (how is it still a “double roll”?)

Gain laundry detergent containers reduced from 92 fl. oz. to 88 fl. oz. without any obvious difference in the size of the container

Dawn dish soap bottles reduced from 19.4 fl. oz. to 18.0 fl. oz.

Green Giant frozen broccoli and cheese sauce packages reduced from 10.0 oz. to 8.0 oz. with no change in the advertised number of servings per package

In some instances of skimpflation, the volume or weight of a product remains the same, but the proportions change. For example, Hungry-Man Double Chicken Bowls (a frozen dinner of fried chicken and macaroni and cheese) maintained a net weight of 15.0 oz., but the protein content dropped from 39 grams to 33 grams.

And while firms are reducing the quantity and quality of the food they sell, consumers are also choosing to purchase less food and even lower-quality food. The January 2023 report on consumer inflation sentiment shows that 69.4 percent of respondents “reduced quantity, quality or both in their grocery purchases due to price increases over the last 12 months.”

We have also seen a widespread and long-lasting change in customer service at restaurants. Many restaurants switched to providing only takeout for months or years. Even though the dine-in option has been reintroduced at some restaurants, the service hasn’t quite been the same, with QR-code menus, shorter hours, less staff, and terse demeanors.

It’s not surprising that the massive government interventions, including creating trillions of new dollars, would have countless effects—some that show up in various statistics but many that do not. For example, if we look back at the period of German hyperinflation, we see surprisingly boring data on food spending proportions (figure 2).

Figure 2: Household expenditures in Germany, 1920–22

Source: Data from Carl-Ludwig Holtfrerich, The German Inflation, 1914–1923: Causes and Effects in International Perspective, trans. Theo Balderston (New York: Walter de Gruyter, 1986), cited in Gerald D. Feldman, The Great Disorder: Politics, Economics, and Society in the German Inflation 1914–1924 (New York: Oxford University Press, 1997), p. 549.

Historian Gerald D. Feldman commented on the German household expenditure data in a way that sounds familiar: “As one study after another pointed out, however, the full impact of these changes had to be understood in qualitative terms.” There was “reduced quality and quantity of the food consumed” and “poorer quality clothing,” among other qualitative changes.

Government statistics are unable to capture these subtleties. This should be obvious—your personal experience as a consumer is more than just the price you pay for a certain weight of food. We aren’t merely machines; we don’t describe our lives in miles per gallon or kilowatt hours.

This is why Ludwig von Mises attacked the conceited aggregates and indexes purported to measure various aspects of consumers’ lives: “The pretentious solemnity which statisticians and statistical bureaus display in computing indexes of purchasing power and cost of living is out of place. These index numbers are at best rather crude and inaccurate illustrations of changes which have occurred.”

He concludes: “A judicious housewife knows much more about price changes as far as they affect her own household than the statistical averages can tell.”

Original Source

https://mises.org/wire/shrinkflation-and-skimpflation-are-eating-our-lunch

Study Finds Substantial Benefits Using ChatGPT to Boost Worker Productivity  

For Some White Collar Writing Tasks Chatbots Increased Productivity by 40%

Amid a huge amount of hype around generative AI, a new study from researchers at MIT sheds light on the technology’s impact on work, finding that it increased productivity for workers assigned tasks like writing cover letters, delicate emails, and cost-benefit analyses.

The tasks in the study weren’t quite replicas of real work: They didn’t require precise factual accuracy or context about things like a company’s goals or a customer’s preferences. Still, a number of the study’s participants said the assignments were similar to things they’d written in their real jobs — and the benefits were substantial. Access to the assistive chatbot ChatGPT decreased the time it took workers to complete the tasks by 40 percent, and output quality, as measured by independent evaluators, rose by 18 percent.

The researchers hope the study, which appears in open-access form in the journal Science, helps people understand the impact that AI tools like ChatGPT can have on the workforce.

“What we can say for sure is generative AI is going to have a big effect on white collar work,” says Shakked Noy, a PhD student in MIT’s Department of Economics, who co-authored the paper with fellow PhD student Whitney Zhang ’21. “I think what our study shows is that this kind of technology has important applications in white collar work. It’s a useful technology. But it’s still too early to tell if it will be good or bad, or how exactly it’s going to cause society to adjust.”

Simulating Work for Chatbots

For centuries, people have worried that new technological advancements would lead to mass automation and job loss. But new technologies also create new jobs, and when they increase worker productivity, they can have a net positive effect on the economy.

“Productivity is front of mind for economists when thinking of new technological developments,” Noy says. “The classical view in economics is that the most important thing that technological advancement does is raise productivity, in the sense of letting us produce economic output more efficiently.”

To study generative AI’s effect on worker productivity, the researchers gave 453 college-educated marketers, grant writers, consultants, data analysts, human resource professionals, and managers two writing tasks specific to their occupation. The 20- to 30-minute tasks included writing cover letters for grant applications, emails about organizational restructuring, and plans for analyses helping a company decide which customers to send push notifications to based on given customer data. Experienced professionals in the same occupations as each participant evaluated each submission as if they were encountering it in a work setting. Evaluators did not know which submissions were created with the help of ChatGPT.

Half of participants were given access to the chatbot ChatGPT-3.5, developed by the company OpenAI, for the second assignment. Those users finished tasks 11 minutes faster than the control group, while their average quality evaluations increased by 18 percent.

The data also showed that performance inequality between workers decreased, meaning workers who received a lower grade in the first task benefitted more from using ChatGPT for the second task.

The researchers say the tasks were broadly representative of assignments such professionals see in their real jobs, but they noted a number of limitations. Because they were using anonymous participants, the researchers couldn’t require contextual knowledge about a specific company or customer. They also had to give explicit instructions for each assignment, whereas real-world tasks may be more open-ended. Additionally, the researchers didn’t think it was feasible to hire fact-checkers to evaluate the accuracy of the outputs. Accuracy is a major problem for today’s generative AI technologies.

The researchers said those limitations could lessen ChatGPT’s productivity-boosting potential in the real world. Still, they believe the results show the technology’s promise — an idea supported by another of the study’s findings: Workers exposed to ChatGPT during the experiment were twice as likely to report using it in their real job two weeks after the experiment.

“The experiment demonstrates that it does bring significant speed benefits, even if those speed benefits are lesser in the real world because you need to spend time fact-checking and writing the prompts,” Noy says.

Taking the Macro View

The study offered a close-up look at the impact that tools like ChatGPT can have on certain writing tasks. But extrapolating that impact out to understand generative AI’s effect on the economy is more difficult. That’s what the researchers hope to work on next.

“There are so many other factors that are going to affect wages, employment, and shifts across sectors that would require pieces of evidence that aren’t in our paper,” Zhang says. “But the magnitude of time saved and quality increases are very large in our paper, so it does seem like this is pretty revolutionary, at least for certain types of work.”

Both researchers agree that, even if it’s accepted that ChatGPT will increase many workers’ productivity, much work remains to be done to figure out how society should respond to generative AI’s proliferation.

“The policy needed to adjust to these technologies can be very different depending on what future research finds,” Zhang says. “If we think this will boost wages for lower-paid workers, that’s a very different implication than if it’s going to increase wage inequality by boosting the wages of already high earners. I think there’s a lot of downstream economic and political effects that are important to pin down.”

The study was supported by an Emergent Ventures grant, the Mercatus Center, George Mason University, a George and Obie Shultz Fund grant, the MIT Department of Economics, and a National Science Foundation Graduate Research Fellowship Grant.

Reprinted with permission from MIT News ( http://news.mit.edu/ )

Harnessing the Power of Microglia

Immune Cells in the Brain May Reduce Damage During Seizures and Promote Recovery

Seizures are like sudden electrical storms in the brain. Seizure disorders like epilepsy affect over 65 million people worldwide and can have profound effects on a person’s quality of life, cognitive function and overall well-being. Prolonged seizures called status epilepticus can cause lasting brain damage.

Specialized immune cells in the brain called microglia are activated during seizures to help clean up the damage. Researchers don’t fully understand exactly how these cells are involved in seizures. Some studies have found that microglia promote seizures, while other studies show the opposite.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of, Synphane Gibbs-Shelton, Ph.D. Candidate in Pharmacology, University of Virginia.

I am a scientist who studies the roles that microglia play in seizures. My colleagues and I at the Eyo Lab at the University of Virginia wanted to investigate the possible protective function microglia serve during seizures and how they affect recovery.

We induced seizures in mice using three different methods – chemical, hyperthermic and electrical – and temporarily removed their microglia. In all three cases, we found that seizures worsened when these cells were absent. Mice without microglia also experienced significant weight loss and decrease in mobility compared with mice with microglia.

Our findings highlight the importance of microglia in safeguarding the brain during seizures and promoting recovery; but they also raise important questions about how these cells provide a protective rather than detrimental effect.

While removing all microglia allowed us to better understand their overall effects on seizures, it meant we were unable to fully assess their contributions in specific brain regions and how they interact with other cells. This is because removing microglia also affects the function of other brain cells. Future studies that more selectively modify microglia or alter their function in a controlled way could help researchers gain a more nuanced understanding of the role these cells play in seizures.

This video shows microglia moving in cell culture.

Researchers also don’t fully understand what specific molecules and signals microglia use to protect the brain during seizures. How well our findings apply to seizure disorders like epilepsy is also unclear. These knowledge gaps highlight the complexity of seizure disorders and the need for continued study.

Identifying strategies to harness the beneficial functions of microglia can help researchers develop better treatments that prevent long-term brain damage and enhance the quality of life of people with seizure disorders.