The Digital Future May Rely on Ultrafast Optical Electronics and Computers
If you’ve ever wished for a faster phone, computer or internet connection, you’ve run up against the limits of today’s technology firsthand. But there might be help on the way.
Over the past several decades, scientists and engineers have worked to develop faster transistors, the electronic components underlying modern electronic and digital communications technologies. These efforts have been based on a category of materials called semiconductors that have special electrical properties. Silicon is perhaps the best-known example of this type of material.
But about a decade ago, scientific efforts hit the speed limit of semiconductor-based transistors. Researchers simply can’t make electrons move faster through these materials. One way engineers are trying to address the speed limits inherent in moving a current through silicon is to design shorter physical circuits – essentially giving electrons less distance to travel. Increasing the computing power of a chip comes down to increasing the number of transistors. However, even if researchers are able to get transistors to be very small, they won’t be fast enough for the faster processing and data transfer speeds people and businesses will need.
This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Mohammed Hassan, Associate Professor of Physics and Optical Sciences, University of Arizona.
My research group’s work aims to develop faster ways to move data, using ultrafast laser pulses in free space and optical fiber. The laser light travels through optical fiber with almost no loss and with a very low level of noise.
In our most recent study, published in February 2023 in Science Advances, we took a step toward that goal. We demonstrated that laser-based systems equipped with optical transistors – switches that rely on photons rather than voltage to control the flow of electrons – can transfer information much more quickly than current systems, and can do so more effectively than previously reported optical switches.
Ultrafast Optical Transistors
At their most fundamental level, digital transmissions involve a signal switching on and off to represent ones and zeros. Electronic transistors use voltage to send this signal: When the voltage induces the electrons to flow through the system, they signal a 1; when there are no electrons flowing, that signals a 0. This requires a source to emit the electrons and a receiver to detect them.
Our system of ultrafast optical data transmission is based on light rather than voltage. Our research group is one of many working with optical communication at the transistor level – the building blocks of modern processors – to get around the current limitations with silicon.
Our system controls reflected light to transmit information. When light shines on a piece of glass, most of it passes through, though a little bit might reflect. That is what you experience as glare when driving toward sunlight or looking through a window.
We use two laser beams transmitted from two sources passing through the same piece of glass. One beam is constant, but its transmission through the glass is controlled by the second beam. By using the second beam to shift the properties of the glass from transparent to reflective, we can start and stop the transmission of the constant beam, switching the optical signal from on to off and back again very quickly.
With this method, we can switch the glass properties much more quickly than current systems can send electrons. So we can send many more on and off signals – zeros and ones – in less time.
How Fast are We Talking?
Our study took the first step toward transmitting data 1 million times faster than is possible with typical electronics. With electrons, each on-off signal takes at best about a nanosecond – one-billionth of a second – which is very fast. But the optical switch we constructed can flip a signal in just a few hundred attoseconds, on the order of a million times faster.
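For a rough sense of those scales, here is a back-of-envelope comparison. The switching times used are illustrative assumptions (about one nanosecond for an electronic switch, a few hundred attoseconds for an optical one), not measured values from the study.

```python
# Back-of-envelope comparison of switching times (illustrative values, not from the study)
electronic_switch_s = 1e-9    # ~1 nanosecond per electronic on/off signal
optical_switch_s = 300e-18    # a few hundred attoseconds per optical on/off signal

speedup = electronic_switch_s / optical_switch_s
print(f"Optical switching is roughly {speedup:,.0f} times faster")  # ~3,300,000x
```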
We were also able to transmit those signals securely so that an attacker who tried to intercept or modify the messages would fail or be detected.
Using a laser beam to carry a signal, and adjusting its signal intensity with glass controlled by another laser beam, means the information can travel not only more quickly but also much greater distances.
For instance, the James Webb Space Telescope recently transmitted stunning images from far out in space. These pictures were transferred as data from the telescope to the base station on Earth at a rate of one “on” or “off” every 35 nanoseconds using optical communications.
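Converting that quoted interval into a data rate is a one-line calculation; the sketch below simply assumes one bit per 35-nanosecond interval, which works out to roughly the 28 megabit-per-second downlink commonly cited for the telescope.

```python
# Convert the quoted switching interval into an approximate data rate
seconds_per_bit = 35e-9                # one "on" or "off" every 35 nanoseconds
bits_per_second = 1 / seconds_per_bit
print(f"{bits_per_second / 1e6:.1f} Mbit/s")  # ≈ 28.6 Mbit/s
```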
A laser system like the one we’re developing could speed up the transfer rate a billionfold, allowing faster and clearer exploration of deep space, more quickly revealing the universe’s secrets. And someday computers themselves might run on light.
Almost No One Uses Bitcoin as Currency, New Data Proves. It’s Actually More Like Gambling
In recent weeks Bitcoin has gained additional legitimacy as an asset, but that has done little to bolster any claim that it is a medium of exchange for goods and services. Does this matter? A Senior Lecturer on Economics and Society shares his thoughts on the present and future of Bitcoin and how that compares with its promise. – Paul Hoffman, Managing Editor, Channelchek
Bitcoin boosters like to claim Bitcoin, and other cryptocurrencies, are becoming mainstream. There’s a good reason to want people to believe this.
The only way the average punter will profit from crypto is to sell it for more than they paid for it. So it’s important to talk up the prospects to build a “fear of missing out”.
There are loose claims that a large proportion of the population – generally in the range of 10% to 20% – now hold crypto. Sometimes these numbers are based on counting crypto wallets, or on surveying wealthy people.
But the hard data on Bitcoin use shows it is rarely bought for the purpose for which it ostensibly exists: to buy things.
Little Use for Payments
The whole point of Bitcoin, as its creator “Satoshi Nakamoto” stated in the opening sentence of the 2008 white paper outlining the concept, was that:
A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution.
The latest data demolishing this idea comes from Australia’s central bank.
Every three years the Reserve Bank of Australia surveys a representative sample of 1,000 adults about how they pay for things. As the following graph shows, cryptocurrency is making almost no impression as a payments instrument, being used by no more than 2% of adults.
Payment Methods Being Used by Australians
By contrast more recent innovations, such as “buy now, pay later” services and PayID, are being used by around a third of consumers.
These findings confirm 2022 data from the US Federal Reserve, showing just 2% of the adult US population made a payment using a cryptocurrency, and Sweden’s Riksbank, showing less than 1% of Swedes made payments using crypto.
The Problem of Price Volatility
One reason for this, and why prices for goods and services are virtually never expressed in crypto, is that most cryptocurrencies fluctuate wildly in value. A shop or cafe with price labels or a blackboard list of prices set in Bitcoin would have to change them every hour.
The following graph from the Bank for International Settlements shows changes in the exchange rates of ten major cryptocurrencies against the US dollar, compared with the euro and Japan’s yen, over the past five years. Such volatility negates cryptocurrency’s value as a currency.
Cryptocurrency’s Volatile Ways
There have been attempts to solve this problem with so-called “stablecoins”. These promise to maintain steady value (usually against the US dollar).
But the spectacular collapse of one of these ventures, Terra, once one of the largest cryptocurrencies, showed the vulnerability of their mechanisms. Even a company with the enormous resources of Facebook owner Meta has given up on its stablecoin venture, Libra/Diem.
This helps explain the failed experiments with making Bitcoin legal tender in the two countries that have tried it: El Salvador and the Central African Republic. The Central African Republic has already revoked Bitcoin’s status. In El Salvador only a fifth of firms accept Bitcoin, despite the law saying they must, and only 5% of sales are paid in it.
Storing Value, Hedging Against Inflation
If Bitcoin isn’t used for payments, what use does it have?
The major attraction – one endorsed by mainstream financial publications – is as a store of value, particularly in times of inflation, because Bitcoin has a hard cap on the number of coins that will ever be “mined”.
In terms of quantity, only 21 million Bitcoins will ever be released, a cap specified in Bitcoin’s source code. The argument goes that with supply fixed, rising demand should push up the price, allowing Bitcoin to hold its purchasing power over the long run.
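That 21 million figure follows from Bitcoin’s issuance schedule: the reward for mining a block started at 50 bitcoins in 2009 and halves every 210,000 blocks. A minimal sketch of the arithmetic (ignoring the protocol’s rounding to whole satoshis):

```python
# Minimal sketch: Bitcoin's supply cap emerges from its halving schedule
blocks_per_halving = 210_000
reward = 50.0                 # bitcoins per block at launch in 2009

total = 0.0
while reward >= 1e-8:         # stop once the reward falls below one satoshi
    total += blocks_per_halving * reward
    reward /= 2

print(f"Maximum supply ≈ {total:,.0f} BTC")  # ≈ 21,000,000
```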
The only problem with this argument is recent history. Over the course of 2022 the purchasing power of major currencies (the US dollar, the euro and the pound) dropped by about 7-10%. The purchasing power of a Bitcoin dropped by about 65%.
Speculation or Gambling?
Bitcoin’s price has always been volatile, and always will be. If its price were to stabilize somehow, those holding it as a speculative punt would soon sell it, which would drive down the price.
But most people buying Bitcoin essentially as a speculative token, hoping its price will go up, are likely to be disappointed. A BIS study has found the majority of Bitcoin buyers globally between August 2015 and December 2022 have made losses.
The “market value” of all cryptocurrencies peaked at US$3 trillion in November 2021. It is now about US$1 trillion.
Bitcoin’s highest price in 2021 was about US$60,000; in 2022 it was US$40,000, and so far in 2023 only US$30,000. Google searches show that public interest in Bitcoin also peaked in 2021. In the US, the proportion of adults with internet access holding cryptocurrencies fell from 11% in 2021 to 8% in 2022.
UK government research published in 2022 found that 52% of British crypto holders owned it as a “fun investment”, which sounds like a euphemism for gambling. Another 8% explicitly said it was for gambling.
The UK parliament’s Treasury Committee, a group of MPs who examine economics and financial issues, has strongly recommended regulating cryptocurrency as a form of gambling rather than as a financial product. They argue that continuing to treat “unbacked crypto assets as a financial service will create a ‘halo’ effect that leads consumers to believe that this activity is safer than it is, or protected when it is not”.
Whatever the merits of this proposal, the UK committee’s underlying point is solid. Buying crypto has more in common with gambling than investing. Proceed at your own risk, and don’t “invest” what you can’t afford to lose.
This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of John Hawkins, Senior Lecturer, Canberra School of Politics, Economics and Society, University of Canberra.
How Will AI Affect Workers? Tech Waves of the Past Show How Unpredictable the Path Can Be
The explosion of interest in artificial intelligence has drawn attention not only to the astonishing capacity of algorithms to mimic humans but to the reality that these algorithms could displace many humans in their jobs. The economic and societal consequences could be nothing short of dramatic.
The route to this economic transformation is through the workplace. A widely circulated Goldman Sachs study anticipates that about two-thirds of current occupations could be affected over the next decade, and that a quarter to a half of the work people do now could be taken over by an algorithm. Up to 300 million jobs worldwide could be affected. The consulting firm McKinsey released its own study predicting an AI-powered boost of US$4.4 trillion to the global economy every year.
The implications of such gigantic numbers are sobering, but how reliable are these predictions?
This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Bhaskar Chakravorti, Dean of Global Business, The Fletcher School, Tufts University.
I lead a research program called Digital Planet that studies the impact of digital technologies on lives and livelihoods around the world and how this impact changes over time. A look at how previous waves of such digital technologies as personal computers and the internet affected workers offers some insight into AI’s potential impact in the years to come. But if the history of the future of work is any guide, we should be prepared for some surprises.
The IT Revolution and the Productivity Paradox
A key metric for tracking the consequences of technology on the economy is growth in worker productivity – defined as how much output of work an employee can generate per hour. This seemingly dry statistic matters to every working individual, because it ties directly to how much a worker can expect to earn for every hour of work. Said another way, higher productivity is expected to lead to higher wages.
Generative AI products are capable of producing written, graphic and audio content or software programs with minimal human involvement. Professions such as advertising, entertainment and creative and analytical work could be among the first to feel the effects. Individuals in those fields may worry that companies will use generative AI to do jobs they once did, but economists see great potential to boost productivity of the workforce as a whole.
The Goldman Sachs study predicts productivity will grow by 1.5% per year because of the adoption of generative AI alone, which would be nearly double the rate recorded from 2010 to 2018. McKinsey is even more aggressive, saying this technology and other forms of automation will usher in the “next productivity frontier,” pushing it as high as 3.3% a year by 2040. That sort of productivity boost, which would approach the rates of previous years, would be welcomed by both economists and, in theory, workers as well.
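To see why the gap between those rates matters, it helps to compound them. The sketch below applies the quoted growth rates over an illustrative 20-year horizon.

```python
# How small differences in annual productivity growth compound over two decades
# (illustrative horizon; the rates are the ones quoted above)
years = 20
for rate in (0.015, 0.033):
    cumulative = (1 + rate) ** years
    print(f"{rate:.1%}/yr for {years} years -> {cumulative:.2f}x today's output per hour")
# ≈ 1.35x at 1.5%/yr versus ≈ 1.91x at 3.3%/yr
```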
If we were to trace the 20th-century history of productivity growth in the U.S., it galloped along at about 3% annually from 1920 to 1970, lifting real wages and living standards. Interestingly, productivity growth slowed in the 1970s and 1980s, coinciding with the introduction of computers and early digital technologies. This “productivity paradox” was famously captured in a comment from MIT economist Bob Solow: You can see the computer age everywhere but in the productivity statistics.
Digital technology skeptics blamed “unproductive” time spent on social media or shopping and argued that earlier transformations, such as the introductions of electricity or the internal combustion engine, had a bigger role in fundamentally altering the nature of work. Techno-optimists disagreed; they argued that new digital technologies needed time to translate into productivity growth, because other complementary changes would need to evolve in parallel. Yet others worried that productivity measures were not adequate in capturing the value of computers.
For a while, it seemed that the optimists would be vindicated. In the second half of the 1990s, around the time the World Wide Web emerged, productivity growth in the U.S. doubled, from 1.5% per year in the first half of that decade to 3% in the second. Again, there were disagreements about what was really going on, further muddying the waters as to whether the paradox had been resolved. Some argued that, indeed, the investments in digital technologies were finally paying off, while an alternative view was that managerial and technological innovations in a few key industries were the main drivers.
Regardless of the explanation, just as mysteriously as it began, that late 1990s surge was short-lived. So despite massive corporate investment in computers and the internet – changes that transformed the workplace – how much the economy and workers’ wages benefited from technology remained uncertain.
Early 2000s: New Slump, New Hype, New Hopes
While the start of the 21st century coincided with the bursting of the so-called dot-com bubble, the year 2007 was marked by the arrival of another technology revolution: the Apple iPhone, which consumers bought by the millions and which companies deployed in countless ways. Yet labor productivity growth started stalling again in the mid-2000s, ticking up briefly in 2009 during the Great Recession, only to return to a slump from 2010 to 2019.
Smartphones have led to millions of apps and consumer services but have also kept many workers more closely tethered to their workplaces. (Credit: Campaigns of the World)
Throughout this new slump, techno-optimists were anticipating new winds of change. AI and automation were becoming all the rage and were expected to transform work and worker productivity. Beyond traditional industrial automation, drones and advanced robots, capital and talent were pouring into many would-be game-changing technologies, including autonomous vehicles, automated checkouts in grocery stores and even pizza-making robots. AI and automation were projected to push productivity growth above 2% annually in a decade, up from the 2010-2014 lows of 0.4%. But before we could get there and gauge how these new technologies would ripple through the workplace, a new surprise hit: the COVID-19 pandemic.
The Pandemic Productivity Push – then Bust
Devastating as the pandemic was, worker productivity surged after it began in 2020; growth in output per hour worked globally hit 4.9%, the highest since records have been available.
Much of this steep rise was facilitated by technology: larger knowledge-intensive companies – inherently the more productive ones – switched to remote work, maintaining continuity through digital technologies such as videoconferencing and communication tools such as Slack, saving workers commuting time and allowing a greater focus on well-being.
While it was clear digital technologies helped boost productivity of knowledge workers, there was an accelerated shift to greater automation in many other sectors, as workers had to remain home for their own safety and comply with lockdowns. Companies in industries ranging from meat processing to operations in restaurants, retail and hospitality invested in automation, such as robots and automated order-processing and customer service, which helped boost their productivity.
But then there was yet another turn in the journey along the technology landscape.
The 2020-2021 surge in investments in the tech sector collapsed, as did the hype about autonomous vehicles and pizza-making robots. Other frothy promises, such as the metaverse’s revolutionizing remote work or training, also seemed to fade into the background.
In parallel, with little warning, “generative AI” burst onto the scene, with an even more direct potential to enhance productivity while affecting jobs – at massive scale. The hype cycle around new technology restarted.
Looking Ahead: Social Factors on Technology’s Arc
Given the number of plot twists thus far, what might we expect from here on out? Here are four issues for consideration.
First, the future of work is about more than just raw numbers of workers, the technical tools they use or the work they do; one should consider how AI affects factors such as workplace diversity and social inequities, which in turn have a profound impact on economic opportunity and workplace culture.
For example, while the broad shift toward remote work could help promote diversity with more flexible hiring, I see the increasing use of AI as likely to have the opposite effect. Black and Hispanic workers are overrepresented in the 30 occupations with the highest exposure to automation and underrepresented in the 30 occupations with the lowest exposure. While AI might help workers get more done in less time, and this increased productivity could increase wages of those employed, it could lead to a severe loss of wages for those whose jobs are displaced. A 2021 paper found that wage inequality tended to increase the most in countries in which companies already relied a lot on robots and that were quick to adopt the latest robotic technologies.
Second, as the post-COVID-19 workplace seeks a balance between in-person and remote working, the effects on productivity – and opinions on the subject – will remain uncertain and fluid. A 2022 study showed improved efficiencies for remote work as companies and employees grew more comfortable with work-from-home arrangements, but according to a separate 2023 study, managers and employees disagree about the impact: The former believe that remote working reduces productivity, while employees believe the opposite.
Third, society’s reaction to the spread of generative AI could greatly affect its course and ultimate impact. Analyses suggest that generative AI can boost worker productivity on specific jobs – for example, one 2023 study found the staggered introduction of a generative AI-based conversational assistant increased productivity of customer service personnel by 14%. Yet there are already growing calls to consider generative AI’s most severe risks and to take them seriously. On top of that, recognition of the astronomical computing and environmental costs of generative AI could limit its development and use.
Finally, given how wrong economists and other experts have been in the past, it is safe to say that many of today’s predictions about AI technology’s impact on work and worker productivity will prove to be wrong as well. Numbers such as 300 million jobs affected or $4.4 trillion annual boosts to the global economy are eye-catching, yet I think people tend to give them greater credibility than warranted.
Also, “jobs affected” does not mean jobs lost; it could mean jobs augmented or even a transition to new jobs. It is best to use analyses such as Goldman’s or McKinsey’s to spark our imaginations about plausible scenarios for the future of work and of workers. It’s better, in my view, to then proactively brainstorm the many factors that could affect which one actually comes to pass, look for early warning signs and prepare accordingly.
The history of the future of work has been full of surprises; don’t be shocked if tomorrow’s technologies are equally confounding.
The Dollar’s Global Usefulness Cannot Easily Be Replaced
The prospect of the dollar being knocked from its perch as the primary fiat currency is worrisome to many Americans. Anxiety has recently been heightened by news of short-term arrangements in which countries agree to trade more directly with one another in their own currencies. China has established quite a few of these agreements over the past year. But is the widespread global use of the dollar in jeopardy?
Daniel Gros is a Professor of Practice and Director of the Institute for European Policymaking at Bocconi University. In the article below, originally published in The Conversation, Professor Gros offers his insight and expectations for the US currency.
Is the end of the dollar’s reign upon us? The prospect is worrisome to Americans.
The position of the US dollar in the global league table of foreign exchange reserves held by other countries is closely watched. Every slight fall in its share is interpreted as confirmation of its imminent demise as the preferred global currency for financial transactions.
The recent drama surrounding negotiations about raising the limit on US federal government debt has only fuelled these predictions by “dollar doomsayers”, who believe repeated crises over the US government’s borrowing limit weakens the country’s perceived stability internationally.
But the real foundation of its dominance is global trade – and it would be very complicated to turn the tide of these many transactions away from the US dollar.
The international role of a global currency in financial markets is ultimately based on its use in non-financial transactions, especially as what’s called an “invoicing currency” in trade. This is the currency in which a company charges its customers.
Modern trade can involve many financial transactions. Today’s supply chains often see goods shipped across several borders, and that’s after they are produced using a combination of intermediate inputs, usually from different countries.
Suppliers may also only get paid after delivery, meaning they have to finance production beforehand. Obtaining this financing in the currency in which they invoice makes trade easier and more cost effective.
In fact, it would be very inconvenient for all participants in a value chain if the invoicing and financing of each element of the chain happened in a different currency. Similarly, if most trade is invoiced and financed in one currency (the US dollar at present), even banks and firms outside the US have an incentive to denominate and settle financial transactions in that currency.
This status quo becomes difficult to change because no individual organisation along the chain has an incentive to switch currencies if others aren’t doing the same.
This is why the US dollar is the most widely used currency in third-country transactions – those that don’t even involve the US. In such situations it’s called a vehicle currency. The euro is used mainly in the vicinity of Europe, whereas the US dollar is widely used in international trade among Asian countries. Researchers call this the dominant currency paradigm.
The convenience of using the US dollar, even outside its home country, is further buttressed by the openness and size of US financial markets. They make up 36% of the world’s total or five times more than the euro area’s markets. Most trade-related financial transactions involve the use of short-term credit, like using a credit card to buy something. As a result, the banking systems of many countries must then be at least partially based on the dollar so they can provide this short-term credit.
And so, these banks need to invest in the US financial markets to refinance themselves in dollars. They can then provide this to their clients as dollar-based short-term loans.
It’s fair to say, then, that the US dollar has not become the premier global currency only because of US efforts to foster its use internationally. It will also continue to dominate as long as private organisations engaged in international trade and finance find it the most convenient currency to use.
What Could Knock the US Dollar Off its Perch?
Some governments such as that of China might try to offer alternatives to the US dollar, but they are unlikely to succeed.
Government-to-government transactions, for example for crude oil between China and Saudi Arabia, could be denominated in yuan. But then the Saudi government would have to find something to do with the Chinese currency it receives. Some could be used to pay for imports from China, but Saudi Arabia imports a lot less from China (about US$30 billion) than it exports (about US$49 billion) to the country.
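The arithmetic behind that imbalance is simple; using the figures quoted above, Saudi Arabia would be left accumulating yuan it cannot easily spend.

```python
# Rough yuan surplus implied by the quoted trade figures (US$ billions per year)
saudi_exports_to_china = 49    # mostly crude oil
saudi_imports_from_china = 30
surplus = saudi_exports_to_china - saudi_imports_from_china
print(f"≈ US${surplus} billion a year accumulating in yuan")  # ≈ US$19 billion
```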
The US$600 billion Public Investment Fund (PIF), Saudi Arabia’s sovereign wealth fund, could of course use the yuan to invest in China. But this is difficult on a large scale because Chinese currency remains only partially “convertible”. This means that the Chinese authorities still control many transactions in and out of China, so that the PIF might not be able to use its yuan funds as and when it needs them. Even without convertibility restrictions, few private investors, and even fewer western investment funds, would be keen to put a lot of money into China if they are at the mercy of the Communist party.
China is of course the country with the strongest political motives to challenge the hegemony of the US dollar. A natural first step would be for China to diversify its foreign exchange reserves away from the US by investing in other countries. But this is easier said than done.
There are few opportunities to invest hundreds or thousands of billions of dollars outside of the US. Figures from the Bank for International Settlements show that the euro area bond market – a place for investors to finance loans to euro area companies and governments – is worth less than one-third of that of the US.
Also, in any big crisis, other major OECD economies like Europe and Japan are more likely to side with the US than with China – and siding with the US is even easier when they are already using US dollars for trade. States accounting for one-half of the global population may have refused to condemn Russia’s invasion of Ukraine, but this half does not account for a large share of global financial markets.
Similarly, it shouldn’t come as a surprise that democracies dominate the world financially. Companies and financial markets require trust and a well-established rule of law. Non-democratic regimes have no basis for establishing the rule of law and every investor is ultimately subject to the whims of the ruler.
When it comes to global trade, currency use is underpinned by a self-reinforcing network of transactions. Because of this, and the size of the US financial market, the dollar’s dominant position remains something for the US to lose rather than for others to gain.
Will Copyright Law Favor Artificial Intelligence End Users?
In 2022, an AI-generated work of art won the Colorado State Fair’s art competition. The artist, Jason Allen, had used Midjourney – a generative AI system trained on art scraped from the internet – to create the piece. The process was far from fully automated: Allen went through some 900 iterations over 80 hours to create and refine his submission.
Yet his use of AI to win the art competition triggered a heated backlash online, with one Twitter user claiming, “We’re watching the death of artistry unfold right before our eyes.”
As generative AI art tools like Midjourney and Stable Diffusion have been thrust into the limelight, so too have questions about ownership and authorship.
These tools’ generative ability is the result of training them on vast numbers of prior artworks, from which the AI learns how to create artistic outputs.
Should the artists whose art was scraped to train the models be compensated? Who owns the images that AI systems produce? Is the process of fine-tuning prompts for generative AI a form of authentic creative expression?
This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Robert Mahari, JD-PhD Student, Massachusetts Institute of Technology (MIT), Jessica Fjeld, Lecturer on Law, Harvard Law School, and Ziv Epstein, PhD Student in Media Arts and Sciences, Massachusetts Institute of Technology (MIT).
On one hand, technophiles rave over work like Allen’s. But on the other, many working artists consider the use of their art to train AI to be exploitative.
We’re part of a team of 14 experts across disciplines that just published a paper on generative AI in Science magazine. In it, we explore how advances in AI will affect creative work, aesthetics and the media. One of the key questions that emerged has to do with U.S. copyright laws, and whether they can adequately deal with the unique challenges of generative AI.
Copyright laws were created to promote the arts and creative thinking. But the rise of generative AI has complicated existing notions of authorship.
Photography Serves as a Helpful Lens
Generative AI might seem unprecedented, but history can act as a guide.
Take the emergence of photography in the 1800s. Before its invention, artists could only try to portray the world through drawing, painting or sculpture. Suddenly, reality could be captured in a flash using a camera and chemicals.
As with generative AI, many argued that photography lacked artistic merit. In 1884, the U.S. Supreme Court weighed in on the issue and found that cameras served as tools that an artist could use to give an idea visible form; the “masterminds” behind the cameras, the court ruled, should own the photographs they create.
From then on, photography evolved into its own art form and even sparked new abstract artistic movements.
AI Can’t Own Outputs
Unlike inanimate cameras, AI possesses capabilities – like the ability to convert basic instructions into impressive artistic works – that make it prone to anthropomorphization. Even the term “artificial intelligence” encourages people to think that these systems have humanlike intent or even self-awareness.
This led some people to wonder whether AI systems can be “owners.” But the U.S. Copyright Office has stated unequivocally that only humans can hold copyrights.
So who can claim ownership of images produced by AI? Is it the artists whose images were used to train the systems? The users who type in prompts to create images? Or the people who build the AI systems?
Infringement or Fair Use?
While artists draw obliquely from past works that have educated and inspired them in order to create, generative AI relies on training data to produce outputs.
This training data consists of prior artworks, many of which are protected by copyright law and which have been collected without artists’ knowledge or consent. Using art in this way might violate copyright law even before the AI generates a new work.
For Jason Allen to create his award-winning art, Midjourney was trained on 100 million prior works.
Was that a form of infringement? Or was it a new form of “fair use,” a legal doctrine that permits the unlicensed use of protected works if they’re sufficiently transformed into something new?
While AI systems do not contain literal copies of the training data, they do sometimes manage to recreate works from the training data, complicating this legal analysis.
Will contemporary copyright law favor end users and companies over the artists whose content is in the training data?
To mitigate this concern, some scholars propose new regulations to protect and compensate artists whose work is used for training. These proposals include a right for artists to opt out of their data’s being used for generative AI or a way to automatically compensate artists when their work is used to train an AI.
Muddled Ownership
Training data, however, is only part of the process. Frequently, artists who use generative AI tools go through many rounds of revision to refine their prompts, which suggests a degree of originality.
Answering the question of who should own the outputs requires looking into the contributions of all those involved in the generative AI supply chain.
The legal analysis is easier when an output is different from works in the training data. In this case, whoever prompted the AI to produce the output appears to be the default owner.
However, copyright law requires meaningful creative input – a standard satisfied by clicking the shutter button on a camera. It remains unclear how courts will decide what this means for the use of generative AI. Is composing and refining a prompt enough?
Matters are more complicated when outputs resemble works in the training data. If the resemblance is based only on general style or content, it is unlikely to violate copyright, because style is not copyrightable.
The illustrator Hollie Mengert encountered this issue firsthand when her unique style was mimicked by generative AI engines in a way that did not capture what, in her eyes, made her work unique. Meanwhile, the singer Grimes embraced the tech, “open-sourcing” her voice and encouraging fans to create songs in her style using generative AI.
If an output contains major elements from a work in the training data, it might infringe on that work’s copyright. Recently, the Supreme Court ruled that Andy Warhol’s drawing of a photograph was not permitted by fair use. That means that using AI to just change the style of a work – say, from a photo to an illustration – is not enough to claim ownership over the modified output.
While copyright law tends to favor an all-or-nothing approach, scholars at Harvard Law School have proposed new models of joint ownership that allow artists to gain some rights in outputs that resemble their works.
In many ways, generative AI is yet another creative tool that allows a new group of people access to image-making, just like cameras, paintbrushes or Adobe Photoshop. But a key difference is this new set of tools relies explicitly on training data, and therefore creative contributions cannot easily be traced back to a single artist.
The ways in which existing laws are interpreted or reformed – and whether generative AI is appropriately treated as the tool it is – will have real consequences for the future of creative expression.
New Projections for Copper Demand High, Price Seen as Still “Muted”
The copper market could see an “unprecedented” inflow in the coming years as investors seek to profit from the metal’s anticipated surge in value, driven by growing demand for electric vehicles (EVs) and renewable energy, according to Citigroup.
In an interview with Bloomberg last week, Max Layton, Citi’s managing director for commodities research, said he believes now is an ideal time for investors to buy, as the price of copper is still muted on global recession concerns. The red metal is currently trading around $8,300 a ton, down approximately 26% from its all-time high of nearly $11,300, set in October 2021.
According to Layton, copper could top out at $15,000 a ton by 2025, a jump that would “make oil’s 2008 bull run look like child’s play.”
Citi also pointed out that copper may dip further in the short-term but could begin to rally in the next six to 12 months as the market fully recognizes the massive imbalance between supply and demand, a gap that’s expected to widen as demand for EVs and renewables expands.
This article was republished with permission from Frank Talk, a CEO Blog by Frank Holmes of U.S. Global Investors (GROW).
Find more of Frank’s articles here – Originally published June 12, 2023.
Internal Combustion Vehicle Sales Set To Peak This Decade: BloombergNEF
As I’ve mentioned before, electric vehicles (EVs) require up to three times more copper compared to traditional internal combustion engine (ICE) vehicles. This presents a challenge because the number of newly discovered copper deposits is decreasing, and the time it takes to go from discovery to production has been increasing due to rising costs. According to S&P Global, out of the 224 copper deposits found between 1990 and 2019, only 16 have been discovered in the last decade.
Meanwhile, EV sales continue to rise. Last year, these sales reached a total of 10.5 million, and projections by Bloomberg New Energy Finance (NEF) suggest that they could escalate to around 27 million by 2026. Bloomberg predicts that the global fleet of ICE vehicles will peak in as little as two years, after which the market will be dominated primarily by EVs and, to a lesser extent, hybrids. By 2030, EVs might constitute 44% of all passenger vehicle sales, and by 2040, they could account for three-quarters of all vehicle sales.
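As a rough illustration of what that shift implies for copper demand, the sketch below assumes about 25 kilograms of copper per conventional car and about 75 kilograms per EV – hypothetical round numbers consistent with the “up to three times more” figure above, since actual content varies by model.

```python
# Rough extra copper demand implied by the 2026 EV sales forecast
# (per-vehicle copper content is an assumption for illustration only)
ev_sales_2026 = 27_000_000
copper_ice_kg = 25     # assumed copper in a conventional car
copper_ev_kg = 75      # assumed copper in an EV (~3x an ICE vehicle)

extra_tonnes = ev_sales_2026 * (copper_ev_kg - copper_ice_kg) / 1000
print(f"≈ {extra_tonnes / 1e6:.2f} million tonnes of additional copper in 2026 alone")
# ≈ 1.35 million tonnes
```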
Tesla Stock Supported By String Of Positive News
Tesla, which remains the world’s largest EV manufacturer, has seen its stock increase over 100% year-to-date in 2023, making it the third-best performer in the S&P 500, behind NVIDIA (+166%) and Meta (+120%). In fact, shares of Tesla have now fully recovered (and then some) from October 2022, when CEO Elon Musk purchased Twitter for $44 billion – a move that raised concerns among investors about Musk’s ability to run the EV manufacturer while taking on a new, time-intensive project, not to mention also juggling SpaceX.
Friday marked the 12th straight day that shares of Tesla have advanced, representing a remarkable winning streak that we haven’t seen since January 2021.
The Austin-based carmaker got a huge boost last week after it announced that its popular Model 3 now qualifies for a $7,500 EV consumer tax credit. This means that in California, which applies its own $7,500 tax rebate for EV purchases, a brand new Tesla Model 3 is cheaper than a Toyota Camry.
To qualify for the U.S. tax credit, Tesla had to make changes to how it sourced materials for its batteries in accordance with the Inflation Reduction Act (IRA), signed into law in August 2022. The IRA stipulates that 40% of electric vehicle battery materials and components must be extracted or processed in the U.S. or in a country that has a free trade agreement with the U.S. This manufacturing threshold will increase annually, and by 2027, 80% of the battery must be produced in the U.S. or a partner country to qualify for the full rebate.
Tesla stock also benefited from last Thursday’s announcement that drivers of EVs made by rival General Motors (GM) would be able to use Tesla’s North American supercharger network starting next year. The deal not only gives GM customers access to an additional 12,000 charging stations across the continent, but it also vastly increases Tesla’s market share of the essential charging infrastructure.
Musk’s Copper Quest
Thinking ahead, Musk reportedly met virtually last month with L. Oyun-Erdene, prime minister of Mongolia. The details of their discussion were not fully disclosed, but it’s worth pointing out that Mongolia is a copper-rich country, home to the world’s fourth-largest copper mine, operated jointly by Rio Tinto and the Mongolian government. In May, Rio Tinto announced that production had finally begun at the mine, which sits 1.3 kilometers (0.8 miles) below the Gobi Desert.
With access to this copper, perhaps Tesla is planning to build a metals processing plant in Mongolia? This would make sense, as the company maintains a factory in Shanghai, China.
US Global Investors Disclaimer
Holdings may change daily. Holdings are reported as of the most recent quarter-end. The following securities mentioned in the article were held by one or more accounts managed by U.S. Global Investors as of (03/31/2023): Tesla Inc.
All opinions expressed and data provided are subject to change without notice. Some of these opinions may not be appropriate to every investor. By clicking the link(s) above, you will be directed to a third-party website(s). U.S. Global Investors does not endorse all information supplied by this/these website(s) and is not responsible for its/their content.
Brain Tumors are Cognitive Parasites – How Brain Cancer Hijacks Neural Circuits and Causes Cognitive Decline
Researchers have long known that brain tumors, specifically a type of tumor called a glioma, can affect a person’s cognitive and physical function. Patients with glioblastoma, the deadliest type of brain tumor in adults, experience an especially drastic decline in quality of life. Glioblastomas are thought to impair normal brain function by compressing healthy tissue and causing it to swell, or by competing with it for blood supply.
What exactly causes cognitive decline in brain tumor patients is still unknown. In our recently published research, we found not only that tumors can remodel neural circuits, but also that brain activity itself can fuel tumor growth.
This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Saritha Krishna, Postdoctoral Fellow in Neurological Surgery, University of California, San Francisco, and Shawn Hervey-Jumper, Associate Professor of Neurological Surgery, University of California, San Francisco.
We are a neuroscientist and neurosurgeon team at the University of California, San Francisco. Our work focuses on understanding how brain tumors remodel neuronal circuits and how these changes affect language, motor and cognitive function. We discovered a previously unknown mechanism brain tumors use to hijack and modify brain circuitry that causes cognitive decline in patients with glioma.
Brain Tumors in Dialogue with Surrounding Cells
When we started this study, scientists had recently found that a self-perpetuating positive feedback loop powers brain tumors. The cycle begins when cancer cells produce substances that can act as neurotransmitters, the chemical messengers that help neurons communicate with each other. This surplus of neurotransmitters triggers neurons to become hyperactive and secrete chemicals that stimulate and accelerate the proliferation and growth of the cancer cells.
We wondered how this feedback loop affects the behavior and cognition of people with brain cancer. To study how glioblastomas engage with neuronal circuits in the human brain, we recorded the real-time brain activity of patients with gliomas while they were undergoing surgery to remove the tumor: they were shown pictures of common objects or animals and asked to name what they depicted.
While the patients engaged in these tasks, the language networks in their brains were activated as expected. However, we found that tumor-infiltrated brain regions quite remote from the brain’s known language zones were also activated during these tasks. This unexpected finding shows that tumors can hijack and restructure connections in the brain tissue surrounding them and increase its activity.
This may account for the cognitive decline frequently associated with the progression of gliomas. However, by directly recording the electrical activity of the brain using electrocorticography, we showed that despite being hyperactive, these remote brain regions had significantly reduced computational power. This was especially the case for processing more complex, less commonly used words, such as “rooster,” in comparison with simple, more commonly used words, such as “car.” This meant that brain cells entangled in the tumor are so compromised that they need to recruit additional cells to carry out tasks previously controlled by a smaller defined area.
We make an analogy to an orchestra. The musicians need to play in synchrony for the music to work. When you lose the cellos and the woodwinds, the remaining musicians can’t deliver the piece as effectively as when all the players are present. Similarly, when brain tumors hijack the areas surrounding them, the brain is less able to function effectively.
Gabapentin as a Promising Drug for Glioblastoma
Having established that tumors can impair cognition by affecting neural connections, we next examined how tumor cells connect with one another and with healthy neurons, using mouse models and brain organoids – clusters of brain cells grown in a petri dish.
These experiments, led by one of us, Saritha Krishna, found that tumor cells secrete a protein called thrombospondin-1 that plays a key role in promoting the hyperactivity of brain cells. We wondered whether blocking this protein, which normally helps neurons form synapses, would halt tumor growth and extend the survival of mice with glioblastoma.
To test this hypothesis, we treated mice with a common anti-seizure drug called gabapentin that blocks thrombospondin-1. We found that gabapentin was able to keep the brain tumors from expanding for several months. These findings highlight the potential of repurposing this existing drug to control brain tumor growth.
Our study suggests that targeting the communication between healthy brain cells and cancer cells could offer another way to treat brain cancer. Combining gabapentin with other conventional therapies could complement existing treatments, helping mitigate cognitive decline and potentially improving survival. We are now exploring new ways to take advantage of this drug’s potential to halt tumor growth. Our goal is to ultimately translate the findings of our study to clinical trials in people.
How Can Congress Regulate AI? Erect Guardrails, Ensure Accountability and Address Monopolistic Power
OpenAI CEO Sam Altman urged lawmakers to consider regulating AI during his Senate testimony on May 16, 2023. That recommendation raises the question of what comes next for Congress. The solutions Altman proposed – creating an AI regulatory agency and requiring licensing for companies – are interesting. But what the other experts on the same panel suggested is at least as important: requiring transparency on training data and establishing clear frameworks for AI-related risks.
Another point left unsaid was that, given the economics of building large-scale AI models, the industry may be witnessing the emergence of a new type of tech monopoly.
This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Anjana Susarla, Professor of Information Systems, Michigan State University.
As a researcher who studies social media and artificial intelligence, I believe that Altman’s suggestions have highlighted important issues but don’t provide answers in and of themselves. Regulation would be helpful, but in what form? Licensing also makes sense, but for whom? And any effort to regulate the AI industry will need to account for the companies’ economic power and political sway.
An Agency to Regulate AI?
Lawmakers and policymakers across the world have already begun to address some of the issues raised in Altman’s testimony. The European Union’s AI Act is based on a risk model that assigns AI applications to three categories of risk: unacceptable, high risk, and low or minimal risk. This categorization recognizes that tools for social scoring by governments and automated tools for hiring pose different risks than those from the use of AI in spam filters, for example.
The U.S. National Institute of Standards and Technology likewise has an AI risk management framework that was created with extensive input from multiple stakeholders, including the U.S. Chamber of Commerce and the Federation of American Scientists, as well as other business and professional associations, technology companies and think tanks.
Federal agencies such as the Equal Employment Opportunity Commission and the Federal Trade Commission have already issued guidelines on some of the risks inherent in AI. The Consumer Product Safety Commission and other agencies have a role to play as well.
Rather than create a new agency that runs the risk of becoming compromised by the technology industry it’s meant to regulate, Congress can support private and public adoption of the NIST risk management framework and pass bills such as the Algorithmic Accountability Act. That would have the effect of imposing accountability, much as the Sarbanes-Oxley Act and other regulations transformed reporting requirements for companies. Congress can also adopt comprehensive laws around data privacy.
Regulating AI should involve collaboration among academia, industry, policy experts and international agencies. Experts have likened this approach to international organizations such as the European Organization for Nuclear Research, known as CERN, and the Intergovernmental Panel on Climate Change. The internet has been managed by nongovernmental bodies involving nonprofits, civil society, industry and policymakers, such as the Internet Corporation for Assigned Names and Numbers and the World Telecommunication Standardization Assembly. Those examples provide models for industry and policymakers today.
Licensing Auditors, Not Companies
Though OpenAI’s Altman suggested that companies could be licensed to release artificial intelligence technologies to the public, he clarified that he was referring to artificial general intelligence, meaning potential future AI systems with humanlike intelligence that could pose a threat to humanity. That would be akin to companies being licensed to handle other potentially dangerous technologies, like nuclear power. But licensing could have a role to play well before such a futuristic scenario comes to pass.
Algorithmic auditing would require credentialing, standards of practice and extensive training. Requiring accountability is not just a matter of licensing individuals but also requires companywide standards and practices.
Experts on AI fairness contend that issues of bias and fairness in AI cannot be addressed by technical methods alone but require more comprehensive risk mitigation practices such as adopting institutional review boards for AI. Institutional review boards in the medical field help uphold individual rights, for example.
Academic bodies and professional societies have likewise adopted standards for responsible use of AI, whether it is authorship standards for AI-generated text or standards for patient-mediated data sharing in medicine.
Strengthening existing statutes on consumer safety, privacy and protection while introducing norms of algorithmic accountability would help demystify complex AI systems. It’s also important to recognize that greater data accountability and transparency may impose new restrictions on organizations.
Scholars of data privacy and AI ethics have called for “technological due process” and frameworks to recognize harms of predictive processes. The widespread use of AI-enabled decision-making in such fields as employment, insurance and health care calls for licensing and audit requirements to ensure procedural fairness and privacy safeguards.
Requiring such accountability provisions, though, demands a robust debate among AI developers, policymakers and those who are affected by broad deployment of AI. In the absence of strong algorithmic accountability practices, the danger is narrow audits that promote the appearance of compliance.
AI Monopolies?
What was also missing in Altman’s testimony is the extent of investment required to train large-scale AI models, whether it is GPT-4, which is one of the foundations of ChatGPT, or text-to-image generator Stable Diffusion. Only a handful of companies, such as Google, Meta, Amazon and Microsoft, are responsible for developing the world’s largest language models.
Given the lack of transparency in the training data used by these companies, AI ethics experts Timnit Gebru, Emily Bender and others have warned that large-scale adoption of such technologies without corresponding oversight risks amplifying machine bias at a societal scale.
It is also important to acknowledge that the training data for tools such as ChatGPT includes the intellectual labor of a host of people such as Wikipedia contributors, bloggers and authors of digitized books. The economic benefits from these tools, however, accrue only to the technology corporations.
Proving technology firms’ monopoly power can be difficult, as the Department of Justice’s antitrust case against Microsoft demonstrated. I believe that the most feasible regulatory options for Congress to address potential algorithmic harms from AI may be to strengthen disclosure requirements for AI firms and users of AI alike, to urge comprehensive adoption of AI risk assessment frameworks, and to require processes that safeguard individual data rights and privacy.
Understanding the Molecular Pathways of How Opioids Work
Your body naturally produces opioids without causing addiction or overdose – studying how this process works could help reduce the side effects of opioid drugs. Opioids such as morphine and fentanyl are like the two-faced Roman god Janus: The kindly face delivers pain relief to millions of sufferers, while the grim face drives an opioid abuse and overdose crisis that claimed nearly 70,000 lives in the U.S. in 2020 alone.
This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of John Michael Streicher, Associate Professor of Pharmacology, University of Arizona.
Scientists like me who study pain and opioids have been seeking a way to separate these two seemingly inseparable faces of opioids. Researchers are trying to design drugs that deliver effective pain relief without the risk of side effects, including addiction and overdose.
One possible path to achieving that goal lies in understanding the molecular pathways opioids use to carry out their effects in your body.
How Do Opioids Work
The opioid system in your body is a set of neurotransmitters your brain naturally produces that enable communication between neurons and activate protein receptors. These neurotransmitters include small protein-like molecules like enkephalins and endorphins. These molecules regulate a tremendous number of functions in your body, including pain, pleasure, memory, the movements of your digestive system and more.
Opioid neurotransmitters activate receptors that are located in a lot of places in your body, including pain centers in your spinal cord and brain, reward and pleasure centers in your brain, and throughout the neurons in your gut. Normally, opioid neurotransmitters are released in only small quantities in these exact locations, so your body can use this system in a balanced way to regulate itself.
The problem comes when you take an opioid drug like morphine or fentanyl, especially at high doses for a long time. These drugs travel through the bloodstream and can activate every opioid receptor in your body. You’ll get pain relief through the pain centers in your spinal cord and brain. But you’ll also get a euphoric high when those drugs hit your brain’s reward and pleasure centers, and that could lead to addiction with repeated use. When the drug hits your gut, you may develop constipation, along with other common opioid side effects.
Targeting Opioid Signal Transduction
How can scientists design opioid drugs that won’t cause side effects?
One approach my research team and I take is to understand how cells respond when they receive the message from an opioid neurotransmitter. Neuroscientists call this process opioid receptor signal transduction. Just as neurotransmitters are a communication network within your brain, each neuron also has a communication network that connects receptors to proteins within the neuron. When these connections are made, they trigger specific effects like pain relief. So, after a natural opioid neurotransmitter or a synthetic opioid drug activates an opioid receptor, it activates proteins within the cell that carry out the effects of the neurotransmitter or the drug.
Opioid signal transduction is complex, and scientists are just starting to figure out how it works. However, one thing is clear: Not every protein involved in this process does the same thing. Some are more important for pain relief, while some are more important for side effects like respiratory depression, or the decrease in breathing rate that makes overdoses fatal.
So what if we target the “good” signals like pain relief, and avoid the “bad” signals that lead to addiction and death? Researchers are tackling this idea in different ways. In fact, in 2020 the U.S. Food and Drug Administration approved the first opioid drug based on this idea, oliceridine, as a painkiller with fewer respiratory side effects.
However, relying on just one drug has downsides. That drug might not work well for all people or for all types of pain. It could also have other side effects that show up only later on. Plenty of options are needed to treat all patients in need.
My research team is targeting a protein called Heat shock protein 90, or Hsp90, which has many functions inside each cell. Hsp90 has been a hot target in the cancer field for years, with researchers developing Hsp90 inhibitors as a treatment for many cancer types.
We’ve found that Hsp90 is also really important in regulating opioid signal transduction. Blocking Hsp90 in the brain blocked opioid pain relief. However, blocking Hsp90 in the spinal cord increased opioid pain relief. Our recently published work uncovered more details on exactly how inhibiting Hsp90 leads to increased pain relief in the spinal cord.
Our work shows that manipulating opioid signaling through Hsp90 offers a path forward to improve opioid drugs. Taking an Hsp90 inhibitor that targets the spinal cord along with an opioid drug could improve the pain relief the opioid provides while decreasing its side effects. With improved pain relief, you can take less opioid and reduce your risk of addiction. We are currently developing a new generation of Hsp90 inhibitors that could help realize this goal.
There may be many paths to developing an improved opioid drug without the burdensome side effects of current drugs like morphine and fentanyl. Separating the kindly and grim faces of the opioid Janus could help provide pain relief we need without addiction and overdose.
War Rooms and Bailouts: How Banks and the Fed are Preparing for a US Default – and the Chaos Expected to Follow
When you are the CEO responsible for a bank and all the related depositors and investors, you don’t take an “it’ll never happen” approach to the possibility of a U.S. debt default. The odds are it won’t happen, but if it does, being unprepared would be devastating. Banks of all sizes are getting their doomsday plans in place, and other industries are as well, but big banks would be most directly impacted on many fronts. The following is an informative article on how banks are preparing. It’s authored by John W. Diamond, Director of the Center for Public Finance at the Baker Institute, Rice University, and republished with permission from The Conversation. – Paul Hoffman, Managing Editor, Channelchek
Convening war rooms, planning speedy bailouts and raising house-on-fire alarm bells: Those are a few of the ways the biggest banks and financial regulators are preparing for a potential default on U.S. debt.
“You hope it doesn’t happen, but hope is not a strategy – so you prepare for it,” Brian Moynihan, CEO of Bank of America, the nation’s second-biggest lender, said in a television interview.
The doomsday planning is a reaction to a lack of progress in talks between President Joe Biden and House Republicans over raising the US$31.4 trillion debt ceiling – another round of negotiations took place on May 16, 2023. Without an increase in the debt limit, the U.S. can’t borrow more money to cover its bills – all of which have already been agreed to by Congress – and in practical terms that means a default.
What happens if a default occurs is an open question, but economists – including me – generally expect financial chaos as access to credit dries up and borrowing costs rise quickly for companies and consumers. A severe and prolonged global economic recession would be all but guaranteed, and the reputation of the U.S. and the dollar as beacons of stability and safety would be further tarnished.
But how do you prepare for an event that many expect would trigger the worst global recession since the 1930s?
Preparing for Panic
Jamie Dimon, who runs JPMorgan Chase, the biggest U.S. bank, told Bloomberg he’s been convening a weekly war room to discuss a potential default and how the bank should respond. The meetings are likely to become more frequent as June 1 – the date on which the U.S. might run out of cash – nears.
Dimon described the wide range of economic and financial effects that the group must consider such as the impact on “contracts, collateral, clearing houses, clients” – basically every corner of the financial system – at home and abroad.
“I don’t think it’s going to happen — because it gets catastrophic, and the closer you get to it, you will have panic,” he said.
That’s when rational decision-making gives way to fear and irrationality. Markets overtaken by these emotions are chaotic and leave lasting economic scars.
Banks haven’t revealed many of the details of how they are responding, but we can glean some clues from how they’ve reacted to past crises, such as the financial crisis in 2008 or the debt ceiling showdowns of 2011 and 2013.
One important way banks can prepare is by reducing exposure to Treasury securities – some or all of which could be considered to be in default once the U.S. exhausts its ability to pay all of its bills. U.S. government debts are issued as Treasury securities, commonly referred to as Treasury bills, notes or bonds.
The value of Treasurys is likely to plunge in the case of a default, which could weaken bank balance sheets even more. The recent bank crisis, in fact, was prompted primarily by a drop in the market value of Treasurys due to the sharp rise in interest rates over the past year. And a default would only make that problem worse, with close to 190 banks at risk of failure as of March 2023.
Another strategy banks can use to hedge their exposure to a sell-off in Treasurys is to buy credit default swaps, financial instruments that allow an investor to offset credit risk. Data suggests this is already happening, as the cost to protect U.S. government debt from default is higher than that of Brazil, Greece and Mexico, all of which have defaulted multiple times and have much lower credit ratings.
But buying credit default swaps at ever-higher prices limits a third key preventive measure for banks: keeping their cash balances as high as possible so they’re able and ready to deal with whatever happens in a default.
Keeping the Financial Plumbing Working
Financial industry groups and financial regulators have also gamed out a potential default with an eye toward keeping the financial system running as best they can.
The Securities Industry and Financial Markets Association, for example, has been updating its playbook to dictate how players in the Treasurys market will communicate in case of a default.
And the Federal Reserve, which is broadly responsible for ensuring financial stability, has been pondering a U.S. default for over a decade. One such instance came in 2013, when Republicans demanded the elimination of the Affordable Care Act in exchange for raising the debt ceiling. Ultimately, Republicans capitulated and raised the limit one day before the U.S. was expected to run out of cash.
One of the biggest concerns Fed officials had at the time, according to a meeting transcript recently made public, is that the U.S. Treasury would no longer be able to access financial markets to “roll over” maturing debt. While hitting the current ceiling prevents the U.S. from issuing new debt that exceeds $31.4 trillion, the government still has to roll existing debt into new debt as it comes due. On May 15, 2023, for example, the government issued just under $100 billion in notes and bonds to replace maturing debt and raise cash.
The risk is that there would be too few buyers at one of the government’s daily debt auctions – at which investors from around the world bid to buy Treasury bills and bonds. If that happens, the government would have to use its cash on hand to pay back investors who hold maturing debt.
That would further reduce the amount of cash available for Social Security payments, federal employees’ wages and countless other items the government spent over $6 trillion on in 2022. This would be nothing short of apocalyptic if the Fed could not save the day.
To mitigate that risk, the Fed said it could immediately step in as a buyer of last resort for Treasurys, quickly lower its lending rates and provide whatever funding is needed in an attempt to prevent financial contagion and collapse. The Fed is likely having the same conversations and preparing similar actions today.
A Self-Imposed Catastrophe
Ultimately, I hope that Congress does what it has done in every previous debt ceiling scare: raise the limit.
These contentious debates over lifting it have become too commonplace, even as lawmakers on both sides of the aisle express concerns about the growing federal debt and the need to rein in government spending. Even when these debates result in some bipartisan effort to rein in spending, as they did in 2011, history shows they fail, as energy analyst Autumn Engebretson and I recently explained in a review of that episode.
That’s why one of the most important ways banks are preparing for such an outcome is by speaking out about the serious damage that not raising the ceiling is likely to inflict, not only on their companies but on everyone else, too. This increases the pressure on political leaders to reach a deal.
Going back to my original question, how do you prepare for such a self-imposed catastrophe? The answer is, no one should have to.
Is a U.S. Default or Bankruptcy Possible – How Would that Work?
It seems no one is talking about what would happen if the U.S. defaulted on maturing debt, yet it is within the realm of possibility. Also not impossible is the idea of this powerful country joining the list of sovereign nations that have declared bankruptcy and survived. A retired government employee with a passion for economic history wrote a timely piece on this subject. It was originally published on the Mises Institute website on May 12, 2023. Channelchek has shared it here with permission.
The current known federal debt is $31.7 trillion, according to the website U.S. Debt Clock. As of April 24, 2023, that works out to about $94,726 for every man, woman, and child who is a citizen. Could you write a check right now, made payable to the United States Treasury, for each family member’s share of the known federal debt after liquidating the assets you own?
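As a rough check on that per-person figure, here is the back-of-the-envelope arithmetic (the population of roughly 335 million is an assumption implied by the article’s numbers, not stated in it): $31.7 trillion ÷ 335 million people ≈ $94,600 per person, in line with the $94,726 cited above.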
A report released by the Federal Reserve Bank of St. Louis on March 6, 2023, gave a similar figure for the total known federal debt: about $31.4 trillion as of December 31, 2022. The federal debt is so large that it can never be repaid in its current form.
Some of us have been part of, or have known, families or businesses with debt that could not be paid even after adjustments such as reducing expenses, increasing income, renegotiating repayment terms with lenders, and selling assets to raise money. Despite those efforts, they still could not pay what they owed.
That is what leads to filing for bankruptcy under federal bankruptcy law, overseen by a federal bankruptcy court.
Chapter 7 bankruptcy is a liquidation proceeding available to consumers and businesses. It allows for assets of a debtor that are not exempt from creditors to be collected and liquidated (turned to cash), and the proceeds distributed to creditors. A consumer debtor receives a complete discharge from debt under Chapter 7, except for certain debts that are prohibited from discharge by the Bankruptcy Code.
Chapter 11 bankruptcy provides a procedure by which an individual or a business can reorganize its debts while continuing to operate. The vast majority of Chapter 11 cases are filed by businesses. The debtor, often with participation from creditors, creates a plan of reorganization under which to repay part or all its debts.
These government entities have filed for Chapter 9 federal bankruptcy:
Orange County, California, in 1994 for about $1.7 billion
Jefferson County, Alabama, in 2011 for about $5 billion
The City of Detroit, Michigan, in 2013 for about $18 billion
The Commonwealth of Puerto Rico in 2017 for about $72 billion (technically under a Chapter 9-like process Congress created for the territory)
According to the United States Courts website:
The purpose of Chapter 9 is to provide a financially-distressed municipality protection from its creditors while it develops and negotiates a plan for adjusting its debts. Reorganization of the debts of a municipality is typically accomplished either by extending debt maturities, reducing the amount of principal or interest, or refinancing the debt by obtaining a new loan.
Although similar to other Chapters in some respects, Chapter 9 is significantly different in that there is no provision in the law for liquidation of the assets of the municipality and distribution of the proceeds to creditors.
The bankruptcies of two counties, a major city, and a U.S. territory left bondholders with financial losses that were not repaid in full, and led to reforms in each governmental entity. Each one emerged from bankruptcy, one hopes, humbled and better able to manage its finances.
The federal government’s best solution for bondholders, taxpayers, and other interested parties is to default, declare sovereign bankruptcy, and make the required changes to get its fiscal business in order. Dictionary.com defines the verb default as “to fail to meet financial obligations or to account properly for money in one’s care.”
Sovereign government defaults are not new in our lifetime: Argentina in 1989, 2001, 2014, and 2020; South Korea, Indonesia, and Thailand in 1997, in what became known as the Asian financial crisis; Greece in 2009; and Russia in 1998.
Possible Outcomes
These defaults have led to outcomes such as:
Sovereign debt ratings being cut by the private rating agencies
Bondholders losing value on their holdings
Debt repayments being renegotiated with lenders
Many countries receiving loans, with repayment plans, from the International Monetary Fund (IMF)
Required reforms to national entitlement programs
Increases in a number of government taxes
Currencies losing value on foreign exchange markets
Price inflation becoming more of a reality for citizens
Higher interest rates being offered on future government debt offerings
Very few in the financial world are talking about any outcomes of a U.S. federal government debt default. One outcome of the 2011 near default was Standard & Poor’s lowering its AAA federal bond rating to AA+, where it has remained.
What organization would oversee the execution of a U.S. federal government debt default, and what authority would it be given to deal with the situation? No suggestions have been offered, and the scale is numerically mind-numbing, since the U.S. has used debt as its drug of choice to overdose on fiscal reality.
Some outcomes would include a lowered federal bond rating from the three private bond rating agencies, and with it the unavoidable reality of higher interest rates on newly issued federal debt. Federal government spending cuts in some form would be required by the realities of economic law: reducing the number of federal employees, abolishing federal agencies, reducing and reforming military budgets, selling federal government property, delegating federal programs to the states, and reforming the federal entitlement programs of Medicaid, Medicare, and Social Security. The share of federal tax revenue devoted to repaying the known debt, with interest, would rise in each future year’s budget.
One real impact of a federal government debt default would be that the U.S. dollar would no longer be the global reserve currency, with dollars held by many national reserve banks flowing back to the U.S. Holding dollars would be like holding a hot potato. Nations holding federal debt paper, such as China ($859 billion), Great Britain ($668 billion), and Japan ($1.11 trillion) as of the January 2023 figures published by the U.S. Treasury, along with many mutual funds and other holders, would see their holdings fall in value, leading to a sell-off of a scale and speed one can hardly imagine. Many mutual fund holders, including retirees, city and state retirement systems, and 401(k) account holders, would be affected by this unfolding event.
An individual or business emerging from federal bankruptcy hopefully comes out humbled: looking back with the perspective of mistakes made, learning from those mistakes, and moving forward with a focus on benefiting family and community.
However, cities, counties, and territories differ from individuals, families, and private businesses in how they emerge from federal bankruptcy. What the outcome of a federal government debt default would be is unknown. Yet its reality is before us.
About the Author:
Stephen Anderson is retired from state government service and is a graduate of The University of Texas at Austin. He currently lives in Texas. His passions are reading, writing, and helping friends and family understand economic history.
What is Hydrogen, and Can it Really Become a Climate Change Solution?
As the United States and other countries work to achieve a goal of zero-carbon electricity generation by 2035, energy providers are swiftly ramping up renewable resources such as solar and wind. But because these technologies churn out electrons only when the sun shines and the wind blows, backup from more reliable energy sources is needed to prevent blackouts and brownouts. Currently, plants burning fossil fuels, primarily natural gas, fill in the gaps. Can we stop using fossil fuels now? – Paul Hoffman, Managing Editor, Channelchek
Hydrogen, or H₂, is getting a lot of attention lately as governments in the U.S., Canada and Europe push to cut their greenhouse gas emissions.
But what exactly is H₂, and is it really a clean power source?
I specialize in researching and developing H₂ production techniques. Here are some key facts about this versatile chemical that could play a much larger role in our lives in the future.
This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Hannes van der Watt, Research Assistant Professor, University of North Dakota.
So, What is Hydrogen?
Hydrogen is the most abundant element in the universe, but because it’s so reactive, it isn’t found on its own in nature. Instead, it is typically bound to other atoms and molecules in water, natural gas, coal and even biological matter like plants and human bodies.
Hydrogen can be isolated, however. And on its own, the H₂ molecule packs a heavy punch as a highly effective energy carrier.
It is already used in industry to manufacture ammonia, methanol and steel and in refining crude oil. As a fuel, it can store energy and reduce emissions from vehicles, including buses and cargo ships.
Hydrogen can also be used to generate electricity with lower greenhouse gas emissions than coal or natural gas power plants. That potential is getting more attention as the U.S. government proposes new rules that would require existing power plants to cut their carbon dioxide emissions.
Because it can be stored, H₂ could help overcome intermittency issues associated with renewable power sources like wind and solar. It can also be blended with natural gas in existing power plants to reduce the plant’s emissions.
Using hydrogen in power plants can reduce carbon dioxide emissions, whether it is blended with natural gas or used alone in specialized turbines, or used in fuel cells, which consume H₂ and oxygen, or O₂, to produce electricity, heat and water. But it’s typically not entirely CO₂-free. That’s in part because isolating H₂ from water or natural gas takes a lot of energy.
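For reference, the overall fuel cell reaction can be summarized in one line (a standard textbook simplification added here for illustration, not drawn from the article): 2 H₂ + O₂ → 2 H₂O, with the chemical energy released as electricity and heat.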
How is Hydrogen Produced?
There are a few common ways to produce H₂:
Electrolysis can isolate hydrogen by splitting water – H₂O – into H₂ and O₂ using an electric current.
Methane reforming uses steam to split methane, or CH₄, into H₂ and CO₂. Oxygen and steam or CO₂ can also be used for this splitting process.
Gasification transforms hydrocarbon-based materials – including biomass, coal or even municipal waste – into synthesis gas, an H₂-rich gas that can be used as a fuel either on its own or as a precursor for producing chemicals and liquid fuels.
Each has benefits and drawbacks.
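As a rough sketch of the chemistry behind these routes, here are the simplified overall reactions (added for illustration; the article itself does not spell them out):
Electrolysis: 2 H₂O → 2 H₂ + O₂, driven by an electric current
Methane reforming (including the water-gas shift step): CH₄ + 2 H₂O → CO₂ + 4 H₂
Gasification (idealized, for carbon-rich feedstocks): C + H₂O → CO + H₂, yielding synthesis gas
In practice, each route also consumes energy for heat, compression and purification, which is part of why their costs differ.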
Green, Blue, Gray – What Do the Colors Mean?
Hydrogen is often described by colors to indicate how clean, or CO₂-free, it is. The cleanest is green hydrogen.
Green H₂ is produced using electrolysis powered by renewable energy sources, such as wind, solar or hydropower. While green hydrogen is completely CO₂-free, it is costly, at around US$4-$9 per kilogram ($2-$4 per pound) because of the high energy required to split water.
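A note on the unit conversions used throughout this article (added for reference, not from the original): 1 kilogram is about 2.2 pounds, so $4-$9 per kilogram works out to roughly $1.80-$4.10 per pound; the per-pound figures quoted here are rounded.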
The largest share of hydrogen today is made from natural gas, which is mostly methane, a potent greenhouse gas, according to IRENA’s 2020 report, Green Hydrogen: A guide to policymaking.
Other less energy-intensive techniques can produce H₂ at a lower cost, but they still emit greenhouse gases.
Gray H₂ is the most common type of hydrogen. It is made from natural gas through methane reforming. This process releases carbon dioxide into the atmosphere and costs around $1-$2.50 per kilogram (50 cents-$1 per pound).
If gray hydrogen’s CO₂ emissions are captured and locked away so they aren’t released into the atmosphere, it can become blue hydrogen. The costs are higher, at around $1.50-$3 per kilogram (70 cents-$1.50 per pound) to produce, and greenhouse gas emissions can still escape when the natural gas is produced and transported.
Another alternative is turquoise hydrogen, produced using both renewable and nonrenewable resources. Renewable resources provide clean energy to convert methane – CH₄ – into H₂ and solid carbon, rather than carbon dioxide that would have to be captured and stored. This type of pyrolysis technology is still new and is estimated to cost between $1.60 and $2.80 per kilogram (70 cents-$1.30 per pound).
Can We Switch Off the Lights on Fossil Fuels Now?
Over 95% of the H₂ produced in the U.S. today is gray hydrogen made with natural gas, which still emits greenhouse gases.
Whether H₂ can ramp up as a natural gas alternative for the power industry and other uses, such as for transportation, heating and industrial processes, will depend on the availability of low-cost renewable energy for electrolysis to generate green H₂.
It will also depend on the development and expansion of pipelines and other infrastructure to efficiently store, transport and dispense H₂.
Without the infrastructure, H₂ use won’t grow quickly. It’s a modern-day version of “Which came first, the chicken or the egg?” Continued use of fossil fuels for H₂ production could spur investment in H₂ infrastructure, but using fossil fuels releases greenhouse gases.
What Does the Future Hold for Hydrogen?
Although green and blue hydrogen projects are emerging, they are small so far.
Policies like Europe’s greenhouse gas emissions limits and the 2022 U.S. Inflation Reduction Act, which offers tax credits up to $3 per kilogram ($1.36 per pound) of H₂, could help make cleaner hydrogen more competitive.
Hydrogen demand is projected to increase to two to four times its current level by 2050. Meeting that demand with green H₂ would require significant amounts of renewable energy, at the same time that new solar, wind and other renewable energy power plants are being built to provide electricity directly to the power sector.
While green hydrogen is a promising trend, it is not the only solution to meeting the world’s energy needs and carbon-free energy goals. A combination of renewable energy sources and clean H₂, including blue, green or turquoise, will likely be necessary to meet the world’s energy needs in a sustainable way.
US Debt Default Could Trigger Dollar’s Collapse – and Severely Erode America’s Political and Economic Might
Congressional leaders at loggerheads over a debt ceiling impasse sat down with President Joe Biden on May 9, 2023, as the clock ticks down to a potentially catastrophic default if nothing is done by the end of the month.
Republicans, who regained control of the House of Representatives in November 2022, are threatening not to allow an increase in the debt limit unless they get spending cuts and regulatory rollbacks in return, which they outlined in a bill passed in April 2023. In so doing, they risk pushing the U.S. government into default.
It feels a lot like a case of déjà vu all over again.
Brinkmanship over the debt ceiling has become a regular ritual – it happened under the Clinton administration in 1995, then again with Barack Obama as president in 2011, and more recently in 2021.
This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Michael Humphries, Deputy Chair of Business Administration, Touro University.
As an economist, I know that defaulting on the national debt would have real-life consequences. Even the threat of pushing the U.S. into default has an economic impact. In August 2011, the mere prospect of a potential default led to an unprecedented downgrade of the nation’s credit rating, hurting America’s financial prestige as well as countless individuals, including retirees.
And that was caused by the mere specter of default. An actual default would be far more damaging.
Dollar’s Collapse
Possibly the most serious consequence would be the collapse of the U.S. dollar and its replacement as global trade’s “unit of account.” Being the unit of account essentially means the dollar is the currency widely used in global finance and trade.
Day to day, most Americans are likely unaware of the economic and political power that goes with being the world’s unit of account. Currently, more than half of world trade – from oil and gold to cars and smartphones – is in U.S. dollars, with the euro accounting for around 30% and all other currencies making up the balance.
As a result of this dominance, the U.S. is the only country on the planet that can pay its foreign debt in its own currency. This gives both the U.S. government and American companies tremendous leeway in international trade and finance.
No matter how much debt the U.S. government owes foreign investors, it can simply print the money needed to pay them back – although for economic reasons, it may not be wise to do so. Other countries must buy either dollars or euros to pay their foreign debt. And the only way for them to do so is either to export more than they import or to borrow more dollars or euros on the international market.
The U.S. is free from such constraints and can run up large trade deficits – that is, import more than it exports – for decades without the same consequences.
For American companies, the dominance of the dollar means they’re not as subject to exchange rate risk as their foreign competitors are. Exchange rate risk refers to how changes in the relative value of currencies may affect a company’s profitability.
Since international trade is generally denominated in dollars, U.S. businesses can buy and sell in their own currency, something their foreign competitors cannot do as easily. As simple as this sounds, it gives American companies a tremendous competitive advantage.
If Republicans push the U.S. into default, the dollar would likely lose its position as the international unit of account, forcing the government and companies to pay their international bills in another currency.
Loss of Political Power Too
The dollar’s dominance means trade must go through an American bank at some point. This is one important way it gives the U.S. tremendous political power, especially to punish economic rivals and unfriendly governments.
For example, when former President Donald Trump imposed economic sanctions on Iran, he denied the country access to American banks and to the dollar. He also imposed secondary sanctions, which means that non-American companies trading with Iran were also sanctioned. Given a choice between access to the dollar and trading with Iran, most of the world’s economies chose access to the dollar and complied with the sanctions. As a result, Iran entered a deep recession, and its currency plummeted about 30%.
President Joe Biden did something similar against Russia in response to its invasion of Ukraine. Limiting Russia’s access to the dollar has helped push the country into a recession that’s bordering on a depression.
No other country today could unilaterally impose this level of economic pain on another country. And all an American president currently needs is a pen.
Rivals Rewarded
Another consequence of the dollar’s collapse would be to enhance the position of the U.S.’s top rival for global influence: China.
While the euro would likely replace the dollar as the world’s primary unit of account, the Chinese yuan would move into second place.
If the yuan were to become a significant international unit of account, this would enhance China’s international position both economically and politically. As it is, China has been working with the other BRIC countries – Brazil, Russia and India – to accept the yuan as a unit of account. With the other three already resentful of U.S. economic and political dominance, a U.S. default would support that effort.
They may not be alone: Recently, Saudi Arabia suggested it was open to trading some of its oil in currencies other than the dollar – something that would change long-standing policy.
Severe Consequences
Beyond the impact on the dollar and the economic and political clout of the U.S., a default would be profoundly felt in many other ways and by countless people.
In the U.S., tens of millions of Americans and thousands of companies that depend on government support could suffer, and the economy would most likely sink into recession – or worse, given the U.S. is already expected to soon suffer a downturn. In addition, retirees could see the worth of their pensions dwindle.
The truth is, we really don’t know what will happen or how bad it will get. The scale of the damage caused by a U.S. default is hard to calculate in advance because it has never happened before.
But there’s one thing we can be certain of. If the threat of default is taken too far, the U.S. and Americans will suffer tremendously.