Will AI Allow for Better Service, Higher Profit?

Image Credit: KAZ Vorpal (Flickr)

Will AI Learn to Become a Better Entrepreneur than You?

Contemporary businesses use artificial intelligence (AI) tools to assist with operations and compete in the marketplace. AI enables firms and entrepreneurs to make data-driven decisions and to quicken the data-gathering process. When creating strategy, buying, selling, and increasing marketplace discovery, firms need to ask: What is better, artificial or human intelligence?

A recent article from the Harvard Business Review, “Can AI Help You Sell?,” stated, “Better algorithms lead to better service and greater success.” The attributes of the successful entrepreneur, such as calculated risk-taking, a tolerance for uncertainty, a keen sense for market signals, and the ability to adjust to marketplace changes, might become a thing of the past. Can AI take the place of the human entrepreneur? Could sophisticated artificial intelligence spot market prices, adjust to expectations, and steer production toward the needs of consumers better than a human can?

In one of my classes this semester, students and I discussed the role of AI, deep machine learning, and natural language processing (NLP) in driving many of the decisions and operations a human would otherwise handle within the firm. Roughly half of the class felt that integrating some level of AI into firms’ operations and resource management is beneficial in creating a competitive advantage.

However, the other half felt using AI will inevitably disable humans’ function in the market economy, resulting in less and less individualism. In other words, the firm will be overrun by AI. We can see that even younger college students are on the fence about whether AI will eliminate humans’ function in the market economy. We concluded as a class that AI and machine learning have their promises and shortcomings.

After class, I started thinking about the digital world of entrepreneurship. E-commerce demands the use of AI to reach customers, sell goods, produce goods, and host exchange—in conjunction with a human entrepreneur, of course.

However, AI—whether machine learning or deep machine learning—could also be tasked with creating a business model, examining data on customers’ needs, designing a web page, and creating ads. Could AI adjust to market action and react to market uncertainty like a human? The answer may be a resounding yes! So, could AI eliminate the human entrepreneur?

Algorithm-XLab explains deep machine learning as something that “allows computers to solve complex problems. These systems can even handle diverse masses of unstructured data set.” Algorithm-XLab compared deep learning with human learning favorably, stating, “While a human can easily lose concentration, and possibly make a mistake, a robot won’t.”

This statement by Algorithm-XLab challenges the idea that trial and error leads to greater market knowledge and better enables entrepreneurs to provide consumers with what they are willing to buy. The statement also portrays the marketplace as a process in which people have perfect knowledge and markets settle at an equilibrium point, and it implies that humans do not have specialized knowledge of time and place.

The use of AI and its tools of deep learning and language processing does have benefits from a technical standpoint. AI can determine how to produce hula hoops better, but can it determine whether to produce them or devote energy elsewhere? If entrepreneurs discover market opportunities, they must weigh the advantages and disadvantages of their potential actions. Will AI have the same entrepreneurial foresight?

Market knowledge can take humans years to acquire; AI is much faster at it. For example, the Allen Institute for AI is “working on systems that can take science tests, which require a knowledge of unstated facts and common sense that humans develop over the course of their lives.” The ability to process unstated, scattered facts is precisely the kind of characteristic we attribute to entrepreneurs. Processes, changes, and choices characterize the operation of the market, and the entrepreneur is at the center of this market function.

There is no doubt that contemporary firms use deep learning for strategy, operations, logistics, sales, and record keeping for human resources (HR) decision-making, according to a Bain & Company article titled “HR’s New Digital Mandate.” While focused on HR, the article does invite questions about the entrepreneurial thinking and strategy conducted within a firm. After AI has learned how to operate a firm, using robotic process automation and NLP capacities to their maximum, might it outstrip humans’ natural entrepreneurial abilities?

AI is used in everyday life, such as self-checkout at the grocery store, online shopping, social media interaction, dating apps, and virtual doctor appointments. Product delivery, financing, and development services increasingly involve an AI-as-a-service component. AI as a service minimizes the costs of gathering and processing customer insights, something usually associated with a team of human minds projecting key performance indicators aligned with an organizational strategy.

The human entrepreneur has a competitive advantage in handling ambiguous customer feedback, crafting an entrepreneurial response, and delivering satisfaction. We seek to determine whether AI has replaced human energy in some areas of life. Can AI understand human uneasiness or dissatisfaction, or the subjectivity of value felt by the consumer? AI can produce hula hoops, but can it articulate plans and gather the resources needed to produce them in the first place?

In what, if any, entrepreneurial functions can AI outperform the human entrepreneur? The human entrepreneur is willing to take risks, adjust to the needs of consumers, pick up price signals, and understand customer choices. Could the human entrepreneur soon become an extinct class? If so, would machine learning and natural language processing AI understand the differences between free and highly regulated markets? If so, which would it prefer, or which would it create?

Why Global Gold Demand Could Stay High

Central Banks Gobbled Up More Gold Last Year Than In Any Year Since 1967

The price of gold stopped just short of hitting $1,960 an ounce last Thursday, its highest level since April 2022, before plunging below $1,900 on Friday following a stronger-than-expected U.S. jobs report, indicating that the current rate hike cycle may be far from over.

I don’t believe that this takes away from the fact that gold posted its best start to the year since 2015. The yellow metal rose 5.72% in January, compared to 8.39% in the same month eight years ago.

This article was republished with permission from Frank Talk, a CEO Blog by Frank Holmes of U.S. Global Investors (GROW). Find more of Frank’s articles here – Originally published February 8, 2023.

I also maintain my bullishness for gold and gold mining stocks in 2023. Gold was one of the very few bright spots in a dismal 2022, ending the year essentially flat, and I expect its performance to remain strong in the year ahead.

Record Retail Demand In 2022

The big headline in the World Gold Council’s (WGC) 2022 review is that total global demand expanded 18% year-over-year, reaching its highest level since 2011.

Central banks were responsible for much of the growth, adding a massive 1,136 metric tons, the largest annual amount since 1967. China began accumulating again in 2022 for the first time in three years, continuing its goal of diversifying away from the dollar.

Meanwhile, retail demand for bars and coins in the U.S. and Europe hit a new annual record last year in response to stubbornly high inflation and the war in Ukraine. Western investors gobbled up 427 tons (approximately 15 million ounces), the most since 2011.

Investors To Shift From Physical Bullion To Gold-Backed ETFs In 2023?

Where I see the opportunity is with gold-backed ETFs and gold mining stocks, neither of which saw the same level of demand as the bullion market last year. Investors withdrew some 110 tons from physical gold ETFs, the second straight year of declines, though at a slower pace than in 2021. Even when the gold price began to climb in November, investors didn’t seem to respond as they have in past rallies.

The WGC suggests that demand for ETFs that hold physical gold will “take the baton” from bars and coins this year. That remains to be seen, but I always recommend that investors diversify, with 5% of their portfolio in bullion, gold jewelry and gold-backed ETFs.

Another 5% can be allocated to high-quality gold mining stocks, mutual funds and ETFs. We prefer companies that have demonstrated strong momentum in revenue, free cash flow and high-growth margins on a per-share basis.

$1 Trillion Investment In The Energy Transition, On Par With Fossil Fuels

If I had to select another metal to watch this year (and beyond), it would be copper. The red metal, we believe, will be one of the greatest beneficiaries of the global low-carbon energy transition that’s taking place. As we seek to electrify everything, from power generation to transportation, copper is the one material that’s used every step of the way.

What’s more, investment in the transition is accelerating. Last year, more than $1 trillion was plowed into new technologies such as renewable energy, energy storage, carbon capture and storage, electrified transport and more.

Not only is this a new annual record amount, but, for the first time ever, it matches what we invested in fossil fuels, according to Bloomberg New Energy Finance (NEF).    

China was the top investor, responsible for $546 billion, or nearly half of the total amount. The U.S. was a distant second at $141 billion, though the Inflation Reduction Act (IRA), signed into law in August 2022, has yet to be fully deployed.

Copper’s Supply-Demand Imbalance

At the same time that copper demand is growing due to the energy transition, the global supply pipeline is thinning due to shrinking exploration budgets and a dramatic slowdown in the number of new deposits.

Take a look at the chart above, courtesy of S&P Global. Copper exploration budgets have not managed to generate a meaningful increase in major new discoveries. According to S&P Global, most of the copper that’s produced every year comes from assets that were discovered in the 1990s.

It may be a good time to consider getting exposure with a high-quality copper miner such as Ivanhoe Mines or a broad-based commodities fund that gives you access to copper exploration and production.

Why NFL Players and Other Pros Need an Elite Money Management Team

Image Credit: Randychiu (Flickr)

Managing Money Is as Important as Making It: The Sad Case of Athletes Going Broke

Lacking a solid team is a recipe for organizational failure, and those intending to excel in business—or any other sector—must invest in management. Considering that many professional athletes encounter bankruptcy shortly after retiring, they are a demographic that could greatly benefit from quality financial management teams. Elite athletes earn millions of dollars during a short time, but few succeed at multiplying their earnings to create wealth. An investigation by the Global Financial Literacy Center found that 16 percent of National Football League (NFL) players declare bankruptcy within twelve years of retirement. Quite startling is that some athletes report bankruptcy as early as two years after retirement.

The results of the study also showed that NFL stars were just as likely to experience bankruptcy as other NFL players. Bankruptcy figures are equally daunting for basketball players. Research reveals that National Basketball Association (NBA) players who file for bankruptcy do so within 7.3 years after retirement, and 6.1 percent of all NBA players go bankrupt within fifteen years of exiting their profession. The emotional trauma of bankruptcy can lead to distress. Research indicates that 78 percent of NFL players experience financial distress two years after retirement.

Inept management of finances is the easiest strategy for losing wealth. Professional athletes can avert financial calamities by investing in a better management team. There is a stark difference between managing a junior athlete and managing a superstar who earns millions of dollars yearly. A professional who manages a junior athlete could be an excellent manager for a player at that stage, but the transition to elite status requires people with greater expertise.

In business, a manager should possess the relevant skills. They don’t have to be your friend. Elite athletes need elite managers to help them navigate stratospheric wealth. If a manager doesn’t have expertise in managing successful athletes or businesses, then he is unfit to manage an elite athlete. Athletes who succeed at expanding their empires are reluctant to rely on the services of amateurs.

Magic Johnson credits his success to investing in capable people rather than to the “wisdom” of family members and old friends. Pablo S. Torre paints Johnson as a serious businessman in a piece highlighting the failures of professional athletes:

Johnson started out by admitting he knew nothing about business and sought counsel from . . . men such as Hollywood agent Michael Ovitz and Peter Guber. Now, Johnson says, he gets calls from star players “every day” . . . and cuts them short if they propose relying on family and friends.

Johnson’s strategy is even more relevant in light of the recent financial scandal involving the disappearance of over twelve million dollars that sprinting legend Usain Bolt held with the Jamaican investment firm Stocks and Securities Limited (SSL). Venting to reporters, Bolt’s attorney Linton Gordon argued that the Financial Services Commission (FSC) should be held liable for the mishap because the agency lapsed in providing proper oversight:

They should bear responsibility to some extent, if not entirely, because all along they kept quiet and did not alert the public, including Mr. Bolt, to the fact that the company was not operating in a way compliant with the law. It’s 10 years now they say they have been red flagging this company. Had he known that he would have withdrawn his money and he would not have lodged anymore.

Blaming the regulator is easy, but the debacle reveals deficits in Bolt’s management team. Usain Bolt did not need to know that SSL was deemed unsound years ago because his management team should have furnished him with that information. Some years ago, I was at an event where fellow investors argued that SSL was irredeemable. Bolt’s managers were out of the loop. Moreover, Jamaica is known for institutional weakness and fraud, so it’s a bit weird that a man of Bolt’s stature would have so much money stored in a Jamaican institution to begin with.

Some say that the FSC must be accountable for the misappropriation of Bolt’s money, but the FSC penned a report that Bolt’s managers would have seen if they were doing research. Moreover, in a country where agencies are frequently compromised by politics, there is a possibility that the FSC did not suspend the operations of SSL because it was constrained by rogue actors. Bolt’s managers should have shown some insight by recommending that the superstar limit his Jamaican investments and by soliciting the services of leading wealth management firms like UBS Wealth Management or Baird.

The case study of Usain Bolt demonstrates that even athletes with good managers should never hesitate to upgrade when their employees are not equipped for bigger challenges. Money is hard to make, but with a bad manager, it’s easy to lose. Therefore, athletes interested in keeping their money must invest in the right team or face the consequences.

About the Author:

Lipton Matthews is a researcher, business analyst, and contributor to Merion West, The Federalist, American Thinker, Intellectual Takeout, mises.org, and Imaginative Conservative. He may be contacted at lo_matthews@yahoo.com

Twitter Data Will Now Cost its Users

Image Credit: The Conversation (February 8, 2023)

Twitter’s New Data Fees Leave Scientists Scrambling for Funding – or Cutting Research

Twitter is ending free access to its application programming interface, or API. An API serves as a software “middleman” allowing two applications to talk to each other. An API is an accessible way to collect and share data within and across organizations. For example, researchers at universities unaffiliated with Twitter can collect tweets and other data from Twitter through its API.

Starting Feb. 9, 2023, those wanting access to Twitter’s API will have to pay. The company is looking for ways to increase revenue to reverse its financial slide, and Elon Musk claimed that the API has been abused by scammers. This cost is likely to hinder the research community that relies on the Twitter API as a data source.

The Twitter API launched in 2006, allowing those outside of Twitter access to tweets and corresponding metadata – information about each tweet, such as who sent it, when it was sent, and how many people liked and retweeted it. Tweets and metadata can be used to understand topics of conversation and how those conversations are “liked” and shared on the platform, and by whom.
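For readers unfamiliar with what “collecting tweets through the API” looks like in practice, here is a minimal sketch in Python of pulling recent tweets and their metadata from Twitter’s v2 recent-search endpoint. The endpoint, parameters, and placeholder bearer token reflect Twitter’s publicly documented v2 API as of early 2023, not anything specified in this article, and the query term is purely illustrative.

```python
# Minimal sketch: pulling tweets and metadata via Twitter's v2 recent-search endpoint.
# Assumes a valid bearer token; endpoint and field names follow Twitter's public v2
# documentation, not details given in this article.
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder credential
URL = "https://api.twitter.com/2/tweets/search/recent"

params = {
    "query": "vaping -is:retweet lang:en",         # illustrative search terms
    "tweet.fields": "created_at,public_metrics",   # metadata: timestamp, likes, retweets
    "expansions": "author_id",                     # who sent each tweet
    "max_results": 100,
}

response = requests.get(
    URL, params=params, headers={"Authorization": f"Bearer {BEARER_TOKEN}"}
)
response.raise_for_status()

for tweet in response.json().get("data", []):
    metrics = tweet["public_metrics"]
    print(tweet["created_at"], metrics["like_count"], metrics["retweet_count"], tweet["text"][:80])
```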

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Jon-Patrick Allem, Assistant Professor of Research in Population and Public Health Sciences, University of Southern California.

As a scientist and director of a research lab focused on collecting and analyzing posts from social media platforms, I have relied on the Twitter API to collect tweets pertinent to public health for over a decade. My team has collected more than 80 million observations over the past decade, publishing dozens of papers on topics from adolescents’ use of e-cigarettes to misinformation about COVID-19.

Twitter has announced that it will allow bots that it deems provide beneficial content to continue unpaid access to the API, and that the company will offer a “paid basic tier,” but it’s unclear whether those will be helpful to researchers.

Blocking Out and Narrowing Down

Twitter is a social media platform that hosts interesting conversations across a variety of topics. As a result of free access to the Twitter API, researchers have followed these conversations to try to better understand public attitudes and behaviors. I’ve treated Twitter as a massive focus group where observations – tweets – can be collected in near real time at relatively low cost.

The Twitter API has allowed me and other researchers to study topics of importance to society. Fees are likely to narrow the field of researchers who can conduct this work, and narrow the scope of some projects that can continue. The Coalition for Independent Technology Research issued a statement calling on Twitter to maintain free access to its API for researchers. Charging for access to the API “will disrupt critical projects from thousands of journalists, academics and civil society actors worldwide who study some of the most important issues impacting our societies today,” the coalition wrote.


The financial burden will not affect all academics equally. Some scientists are positioned to cover research costs as they arise in the course of a study, even unexpected or unanticipated costs. In particular, scientists at large research-heavy institutions with grant budgets in the millions of dollars are likely to be able to cover this kind of charge.

However, many researchers will be unable to cover the as yet unspecified costs of the paid service because they work on fixed or limited budgets. For example, doctoral students who rely on the Twitter API for data for their dissertations may not have additional funding to cover this charge. Charging for access to the Twitter API will ultimately reduce the number of participants working to understand the world around us.

The terms of Twitter’s paid service will require me and other researchers to narrow the scope of our work, as pricing limits will make it too expensive to continue to collect as much data as we would like. As the amount of data requested goes up, the cost goes up.

We will be forced to forgo data collection on some topic areas. For example, we collect a lot of tobacco-related conversations, and people talk about tobacco by referencing the behavior – smoking or vaping – and also by referencing a product, like JUUL or Puff Bar. I add as many terms as I can think of to cast a wide net. If I’m going to be charged per word, it will force me to rethink how wide a net I cast. This will ultimately reduce our understanding of issues important to society.
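To make the “wide net” idea concrete, the sketch below shows one hypothetical way a researcher might assemble a broad topic query from many related terms and compare it with a narrower one. The term list, helper function, and any pricing implication are illustrative assumptions, not details from Twitter or from this study.

```python
# Hypothetical sketch: building a broad "wide net" query from many related terms.
# The term list is illustrative; trimming terms narrows the net (and, under any
# pricing tied to query size or volume, would lower the cost of collection).
tobacco_terms = [
    "smoking", "vaping", "e-cigarette", "ecig", "cigarette",
    "JUUL", "Puff Bar", "nicotine", "hookah",
]

def build_query(terms, exclude_retweets=True):
    """OR together every term; quote multi-word product names."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    query = "(" + " OR ".join(quoted) + ")"
    if exclude_retweets:
        query += " -is:retweet"
    return query

wide_net = build_query(tobacco_terms)
narrow_net = build_query(tobacco_terms[:3])  # fewer terms, fewer matching tweets
print(wide_net)
print(narrow_net)
```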

Difficult Adjustments

Costs aside, many academic institutions are likely to have a difficult time adapting to these changes. For example, most universities are slow-moving bureaucracies with a lot of red tape. Entering into a financial relationship or completing even a small purchase may take weeks or months. In the face of the impending Twitter API change, this will likely delay data collection and the knowledge it would produce.

Unfortunately, everyone relying on the Twitter API for data was given little more than a week’s notice of the impending change. This short period has researchers scrambling as we try to prepare our data infrastructures for the changes ahead and make decisions about which topics to continue studying and which topics to abandon.

If the research community fails to properly prepare, scientists are likely to face gaps in data collection that will reduce the quality of our research. And in the end that means a loss of knowledge for the world.

How is the Economy Really Doing (Just the Facts)?

Image Credit: USA Facts

Do the Most Current Economic Measurements Suggest a Trend Toward Recession or Growth?

How is the US Economy Doing?

  • US GDP increased 2.1% in 2022 after increasing 5.9% in 2021.
  • Year-over-year inflation, the rate at which consumer prices increase, was 6.5% in December 2022.
  • The Federal Reserve raised interest rates seven times in 2022 and again on February 1, 2023 to curb inflation, increasing the target rate from near zero to 4.5-4.75%.
  • When accounting for inflation, workers’ average hourly earnings were down 1.7% in December 2022 compared to a year prior.
  • The ratio of unemployed people to job openings remained at or near record lows throughout 2022.
  • The unemployment rate was 4.0% at the beginning of 2022 and ended the year at 3.5%.
  • The labor force participation rate remains almost one percentage point below its February 2020 level.
  • From January to November 2022, the US imported $889.9 billion more in goods and services than it exported. This is 7% higher than the trade deficit in 2021 for the same months.

US GDP increased 2.1% in 2022 after increasing 5.9% in 2021

Gross domestic product (GDP) fell in the first half of 2022 but grew in the second half. GDP reached $25.5 trillion in 2022.

US GDP

Year-over-year inflation, the rate at which consumer prices increase, was 6.5% in December 2022

That’s down from June 2022’s rate of 9.1%, the largest 12-month increase in 40 years. Inflation grew at the beginning of the year partly due to rising food and energy prices, while housing costs contributed throughout 2022.

CPI-U

The Federal Reserve raised interest rates seven times in 2022 and again on February 1, 2023 to curb inflation, increasing the target rate from near zero to 4.5-4.75%

Rate increases make it more expensive for banks to borrow from each other. Banks pass these costs on to consumers through increased interest rates. Read more about how the Federal Reserve tries to control inflation here.

Fed Funds Rate

When accounting for inflation, workers’ average hourly earnings were down 1.7% in December 2022 compared to a year prior

Inflation-adjusted average hourly earnings fell in all industries except information and leisure and hospitality, where earnings were flat.
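As a rough illustration of how such an inflation adjustment works, the sketch below converts nominal wage growth into a real (inflation-adjusted) change using the 6.5% December 2022 CPI figure cited above; the 4.6% nominal growth figure is an illustrative assumption rather than a number reported in this piece.

```python
# Back-of-the-envelope inflation adjustment for hourly earnings.
# The 4.6% nominal growth figure is an illustrative assumption; the 6.5% CPI figure
# comes from the text above. Real change = (1 + nominal) / (1 + inflation) - 1.
nominal_growth = 0.046   # assumed year-over-year growth in average hourly earnings
inflation = 0.065        # December 2022 year-over-year CPI, as cited above

real_change = (1 + nominal_growth) / (1 + inflation) - 1
print(f"Real hourly earnings change: {real_change:.1%}")  # roughly -1.8%, near the -1.7% cited
```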

Hourly Earnings

The ratio of unemployed people to job openings remained at or near record lows throughout 2022

In a typical month between March 2018 and February 2020, there were between 0.8 and 0.9 unemployed people per job opening. But after more than quadrupling in April 2020 at the onset of the pandemic, the ratio fell, settling between 0.5 and 0.6 unemployed people per job opening from December 2021 to December 2022, the lowest since data first became available in 2000.

Unemployed Ratio

The unemployment rate was 4.0% at the beginning of 2022 and ended the year at 3.5%

It decreased most for Black and Asian people, 1.2 and 1.1 percentage points, respectively. Black people still have unemployment rates higher than the rest of the nation.

Categorized Unemployment Rate

The labor force participation rate remains almost one percentage point below its February 2020 level

An additional 2.5 million workers would need to be in the labor force for the participation rate to reach its pre-pandemic level.

From January to November 2022, the US imported $889.9 billion more in goods and services than it exported. This is 7% higher than the trade deficit in 2021 for the same months

During this time, the goods trade deficit reached $1.1 trillion. Complete 2022 data is expected on February 7, 2023.

Trade Balance

This content was republished from USAFacts. USAFacts is a not-for-profit, nonpartisan civic initiative making government data easy for all Americans to access and understand. It provides accessible analysis on US spending and outcomes in order to ground public debates in facts.

Workday Commute and the Transition it Provides

Image Credit: Luis Zambrano (Flickr)

A Journey from Work to Home is about More than Just Getting There – the Psychological Benefits of Commuting that Remote Work Doesn’t Provide

For most American workers who commute, the trip to and from the office takes nearly one full hour a day – 26 minutes each way on average, with 7.7% of workers spending two hours or more on the road.

Many people think of commuting as a chore and a waste of time. However, during the remote work surge resulting from the COVID-19 pandemic, several journalists noted a curious phenomenon: people were – could it be? – missing their commutes. One woman told The Washington Post that even though she was working from home, she regularly sat in her car in the driveway at the end of the workday in an attempt to carve out some personal time and mark the transition from work to nonwork roles.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Matthew Piszczek, Assistant Professor of Management, Wayne State University, and Kristie McAlpine, Assistant Professor of Management, Rutgers University.

As management scholars who study the interface between peoples’ work and personal lives, we sought to understand what it was that people missed when their commutes suddenly disappeared.

In our recently published conceptual study, we argue that commutes are a source of “liminal space” – a time free of both home and work roles that provides an opportunity to recover from work and mentally switch gears to home.

During the shift to remote work, many people lost this built-in support for these important daily processes. Without the ability to mentally shift gears, people experience role blurring, which can lead to stress. Without mentally disengaging from work, people can experience burnout.

We believe the loss of this space helps explain why many people missed their commutes.

One of the more surprising discoveries during the pandemic has been that many people who switched to remote work actually missed their commutes. Gerald Streiter (Flickr)

Commutes and Liminal Space

In our study, we wanted to learn whether the commute provides that time and space, and what the effects are when it becomes unavailable.

We reviewed research on commuting, role transitions and work recovery to develop a model of a typical American worker’s commute liminal space. We focused our research on two cognitive processes: psychological detachment from the work role – mentally disengaging from the demands of work – and psychological recovery from work – rebuilding stores of mental energy used up during work.

Based on our review, we developed a model showing that the liminal space created in the commute creates opportunities for detachment and recovery.

However, we also found that day-to-day variations may affect whether this liminal space is accessible for detachment and recovery. For instance, train commuters must devote attention to selecting their route, monitoring arrivals or departures and ensuring they get off at the right stop, whereas car commuters must devote consistent attention to driving.

We found that, on the one hand, more attention to the act of commuting means less attention that could otherwise be put toward relaxing recovery activities like listening to music and podcasts. On the other hand, longer commutes might give people more time to detach and recover.

In an unpublished follow-up study, we examined a week of commutes of 80 university employees to test our conceptual model. The employees completed morning and evening surveys asking about the characteristics of their commutes, whether they “shut off” from work and relaxed during the commute and whether they felt emotionally exhausted when they got home.

Most of the workers in this study reported using the commute’s liminal space to both mentally transition from work to home roles and to start psychologically recovering from the demands of the workday. Our study also confirms that day-to-day variations in commutes predict the ability to do so.

We found that on days with longer-than-average commutes, people reported higher levels of psychological detachment from work and were more relaxed during the commute. However, on days when commutes were more stressful than usual, they reported less psychological detachment from work and less relaxation during the commute.

Creating Liminal Space

Our findings suggest that remote workers may benefit from creating their own form of commute to provide liminal space for recovery and transition – such as a 15-minute walk to mark the beginning and end of the workday.

Our preliminary findings align with related research suggesting that those who have returned to the workplace might benefit from seeking to use their commute to relax as much as possible.

To help enhance work detachment and relaxation during the commute, commuters could try to avoid ruminating about the workday and instead focus on personally fulfilling uses of the commute time, such as listening to music or podcasts, or calling a friend. Other forms of commuting such as public transit or carpooling may also provide opportunities to socialize.

Our data shows that commute stress detracts from detachment and relaxation during the commute more than commute length does. So some people may find it worth their time to take the “scenic route” home in order to avoid tense driving situations.

Cultivating a Microbiome that Reduces the Incidence of Cancer

Image Credit: NIH (Flickr)

Microbes in Your Food Can Help or Hinder Your Body’s Defenses Against Cancer – How Diet Influences the Conflict Between Cell ‘Cooperators’ and ‘Cheaters’

The microbes living in your food can affect your risk of cancer. While some help your body fight cancer, others help tumors evolve and grow.

Gut microbes can influence your cancer risk by changing how your cells behave. Many cancer-protective microbes support normal, cooperative behavior of cells. Meanwhile, cancer-inducing microbes undermine cellular cooperation and increase your risk of cancer in the process.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Gissel Marquez Alcaraz, Ph.D. Student in Evolutionary Biology, Arizona State University and Athena Aktipis, Associate Professor of Psychology, Center for Evolution and Medicine, Arizona State University.

We are evolutionary biologists who study how cooperation and conflict occur inside the human body, including the ways cancer can evolve to exploit the body. Our systematic review examines how diet and the microbiome affect the ways the cells in your body interact with each other and either increase or decrease your risk of cancer.

Cancer is a Breakdown of Cell Cooperation

Every human body is a symphony of multicellular cooperation. Thirty trillion cells cooperate and coordinate with each other to make us viable multicellular organisms.

For multicellular cooperation to work, cells must engage in behaviors that serve the collective. These include controlled cell division, proper cell death, resource sharing, division of labor and protection of the extracellular environment. Multicellular cooperation is what allows the body to function effectively. If genetic mutations interfere with these proper behaviors, they can lead to the breakdown of cellular cooperation and the emergence of cancer.

Cancer cells can be thought of as cellular cheaters because they do not follow the rules of cooperative behavior. They mutate uncontrollably, evade cell death and take up excessive resources at the expense of the other cells. As these cheater cells replicate, cancer in the body begins to grow.

Cancer is fundamentally a problem of having multiple cells living together in one organism. As such, it has been around since the origins of multicellular life. This means that cancer suppression mechanisms have been evolving for hundreds of millions of years to help keep would-be cancer cells in check. Cells monitor themselves for mutations and induce cell death, also known as apoptosis, when necessary. Cells also monitor their neighbors for evidence of abnormal behavior, sending signals to aberrant cells to induce apoptosis. In addition, the body’s immune system monitors tissues for cancer cells to destroy them.

Cells that are able to evade detection, avoid apoptosis and replicate quickly have an evolutionary advantage within the body over cells that behave normally. This process within the body, called somatic evolution, is what leads cancer cells to grow and make people sick.

Microbes Can Help or Hinder Cell Cooperation

Microbes can affect cancer risk through changing the ways that the cells of the body interact with one another.

Some microbes can protect against cancer by helping maintain a healthy environment in the gut, reducing inflammation and DNA damage, and even by directly limiting tumor growth. Cancer-protective microbes like Lactobacillus pentosus, Lactobacillus gasseri and Bifidobacterium bifidum are found in the environment and different foods, and can live in the gut. These microbes promote cooperation among cells and limit the function of cheating cells by strengthening the body’s cancer defenses. Lactobacillus acidophilus, for example, increases the production of a protein called IL-12 that stimulates immune cells to act against tumors and suppress their growth.

Other microbes can promote cancer by inducing mutations in healthy cells that make it more likely for cellular cheaters to emerge and outcompete cooperative cells. Cancer-inducing microbes such as Enterococcus faecalis, Helicobacter pylori and Papillomavirus are associated with increased tumor burden and cancer progression. They can release toxins that damage DNA, change gene expression and increase the proliferation of tumor cells. Helicobacter pylori, for example, can induce cancer by secreting a protein called Tipα that can penetrate cells, alter their gene expression and drive gastric cancer.

Healthy Diet with Cancer-Protective Microbes

Because what you eat determines the amount of cancer-inducing and cancer-preventing microbes inside your body, we believe that the microbes we consume and cultivate are an important component of a healthy diet.

Beneficial microbes are typically found in fermented and plant-based diets, which include foods like vegetables, fruits, yogurt and whole grains. These foods have high nutritional value and contain microbes that increase the immune system’s ability to fight cancer and lower overall inflammation. High-fiber foods are prebiotic in the sense that they provide resources that help beneficial microbes thrive and subsequently provide benefits for their hosts. Many cancer-fighting microbes are abundantly present in fermented and high-fiber foods.

In contrast, harmful microbes can be found in highly processed and meat-based diets. The Western diet, for example, contains an abundance of red and processed meats, fried food and high-sugar foods. It has long been known that meat-based diets are linked to higher cancer prevalence and that red meat is a carcinogen. Studies have shown that meat-based diets are associated with cancer-inducing microbes, including Fusobacteria and Peptostreptococcus, in both humans and other species.

Microbes can enhance or interfere with how the body’s cells cooperate to prevent cancer. We believe that purposefully cultivating a microbiome that promotes cooperation among our cells can help reduce cancer risk.

Game of Chicken With the US Economy Getting Under Way

Image Credit: US Embassy, South Africa (Flickr)

US Debt Default Could Trigger Dollar’s Collapse – and Severely Erode America’s Political and Economic Might

Republicans, who regained control of the House of Representatives in November 2022, are threatening not to allow an increase in the debt limit unless spending cuts are agreed to. In so doing, they risk pushing the U.S. government into default.

Brinkmanship over the debt ceiling has become a regular ritual – it happened under the Clinton administration in 1995, then again with Barack Obama as president in 2011, and more recently in 2021.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Michael Humphries, Deputy Chair of Business Administration, Touro University.

As an economist, I know that defaulting on the national debt would have real-life consequences. Even the threat of pushing the U.S. into default has an economic impact. In August 2011, the mere prospect of a potential default led to an unprecedented downgrade of the nation’s credit rating, hurting America’s financial prestige as well as countless individuals, including retirees.

And that was caused by the mere specter of default. An actual default would be far more damaging.

Dollar’s Collapse

Possibly the most serious consequence would be the collapse of the U.S. dollar and its replacement as global trade’s “unit of account.” That essentially means that it is widely used in global finance and trade.

Day to day, most Americans are likely unaware of the economic and political power that goes with being the world’s unit of account. Currently, more than half of world trade – from oil and gold to cars and smartphones – is in U.S. dollars, with the euro accounting for around 30% and all other currencies making up the balance.

As a result of this dominance, the U.S. is the only country on the planet that can pay its foreign debt in its own currency. This gives both the U.S. government and American companies tremendous leeway in international trade and finance.

No matter how much debt the U.S. government owes foreign investors, it can simply print the money needed to pay them back – although for economic reasons, it may not be wise to do so. Other countries must buy either the dollar or the euro to pay their foreign debt. And the only way for them to do so is either to export more than they import or to borrow more dollars or euros on the international market.

The U.S. is free from such constraints and can run up large trade deficits – that is, import more than it exports – for decades without the same consequences.

For American companies, the dominance of the dollar means they’re not as subject to exchange rate risk as their foreign competitors are. Exchange rate risk refers to how changes in the relative value of currencies may affect a company’s profitability.

Since international trade is generally denominated in dollars, U.S. businesses can buy and sell in their own currency, something their foreign competitors cannot do as easily. As simple as this sounds, it gives American companies a tremendous competitive advantage.

If Republicans push the U.S. into default, the dollar would likely lose its position as the international unit of account, forcing the government and companies to pay their international bills in another currency.

Loss of Political Power Too

Since most foreign trade is denominated in the dollar, trade must go through an American bank at some point. This is one important way dollar dominance gives the U.S. tremendous political power, especially to punish economic rivals and unfriendly governments.

For example, when former President Donald Trump imposed economic sanctions on Iran, he denied the country access to American banks and to the dollar. He also imposed secondary sanctions, which means that non-American companies trading with Iran were also sanctioned. Given a choice of access to the dollar or trading with Iran, most of the world economies chose access to the dollar and complied with the sanctions. As a result, Iran entered a deep recession, and its currency plummeted about 30%.

President Joe Biden did something similar against Russia in response to its invasion of Ukraine. Limiting Russia’s access to the dollar has helped push the country into a recession that’s bordering on a depression.

No other country today could unilaterally impose this level of economic pain on another country. And all an American president currently needs is a pen.

Rivals Rewarded

Another consequence of the dollar’s collapse would be to enhance the position of the U.S.’s top rival for global influence: China.

While the euro would likely replace the dollar as the world’s primary unit of account, the Chinese yuan would move into second place.

If the yuan were to become a significant international unit of account, this would enhance China’s international position both economically and politically. As it is, China has been working with the other BRIC countries – Brazil, Russia and India – to accept the yuan as a unit of account. With the other three already resentful of U.S. economic and political dominance, a U.S. default would support that effort.

They may not be alone: Recently, Saudi Arabia suggested it was open to trading some of its oil in currencies other than the dollar – something that would change long-standing policy.

Severe Consequences

Beyond the impact on the dollar and the economic and political clout of the U.S., a default would be profoundly felt in many other ways and by countless people.

In the U.S., tens of millions of Americans and thousands of companies that depend on government support could suffer, and the economy would most likely sink into recession – or worse, given the U.S. is already expected to soon suffer a downturn. In addition, retirees could see the worth of their pensions dwindle.

The truth is, we really don’t know what will happen or how bad it will get. The scale of the damage caused by a U.S. default is hard to calculate in advance because it has never happened before.

But there’s one thing we can be certain of. If there is a default, the U.S. and Americans will suffer tremendously.

How Does the Moderna Cancer Vaccine Work?

Moderna is testing an mRNA vaccine in combination with pembrolizumab to treat melanoma (The Conversation)

Moderna’s Experimental Cancer Vaccine Treats But Doesn’t Prevent Melanoma – a Biochemist Explains How it Works

Media outlets have reported the encouraging findings of clinical trials for a new experimental vaccine developed by the biotech company Moderna to treat an aggressive type of skin cancer called melanoma.

Although this is potentially very good news, it occurred to me that the headlines may be unintentionally misleading. The vaccines most people are familiar with prevent disease, whereas this experimental new skin cancer vaccine treats only patients who are already sick. Why is it called a vaccine if it does not prevent cancer?

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Mark R. O’Brian, Professor and Chair of Biochemistry, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo.

I am a biochemist and molecular biologist studying the roles that microbes play in health and disease. I also teach cancer genetics to medical students and am interested in how the public understands science. While preventive and therapeutic vaccines are administered for different health care goals, they both train the immune system to recognize and fight off a specific disease agent that causes illness.

Melanoma is an aggressive form of skin cancer

How Do Preventive Vaccines Work?

Most vaccines are administered to healthy people before they get sick to prevent illnesses caused by viruses or bacteria. These include vaccines that prevent polio, measles, COVID-19 and many other diseases. Researchers have also developed vaccines to prevent some types of cancers that are caused by such viruses as the human papillomaviruses and Epstein-Barr virus.

Your immune system recognizes objects such as certain microbes and allergens that do not belong in your body and initiates a series of cellular events to attack and destroy them. Thus, a virus or bacterium that enters the body is recognized as something foreign and triggers an immune response to fight off the microbial invader. This results in a cellular memory that will elicit an even faster immune response the next time the same microbe intrudes.

The problem is that sometimes the initial infection causes serious illness before the immune system can mount a response against it. While you may be better protected against a second infection, you have suffered the potentially damaging consequences of the first one.

This is where preventive vaccines come in. By introducing a harmless version or a portion of the microbe to the immune system, the body can learn to mount an effective response against it without causing the disease.

For example, the Gardasil-9 vaccine protects against the human papillomavirus, or HPV, which causes cervical cancer. It contains protein components found in the virus that cannot cause disease but do elicit an immune response that protects against future HPV infection, thereby preventing cervical cancer.

How Does the Moderna Cancer Vaccine Work?

Unlike cervical cancer, skin melanoma isn’t caused by a viral infection, according to the latest evidence. Nor does Moderna’s experimental vaccine prevent cancer as Gardasil-9 does.

The Moderna vaccine trains the immune system to fight off an invader in the same way preventive vaccines most people are familiar with do. However, in this case the invader is a tumor, a rogue version of normal cells that harbors abnormal proteins that the immune system can recognize as foreign and attack.

What are these abnormal proteins and where do they come from?

All cells are made up of proteins and other biological molecules such as carbohydrates, lipids and nucleic acids. Cancer is caused by mutations in regions of genetic material, or DNA, that encode instructions on what proteins to make. Mutated genes result in abnormal proteins called neoantigens that the body recognizes as foreign. That can trigger an immune response to fight off a nascent tumor. However, sometimes the immune response fails to subdue the cancer cells, either because the immune system is unable to mount a strong enough response or the cancer cells have found a way to circumvent the immune system’s defenses.

Moderna’s experimental melanoma vaccine contains genetic information that encodes for portions of the neoantigens in the tumor. This genetic information is in the form of mRNA, which is the same form used in the Moderna and Pfizer-BioNtech COVID-19 vaccines. Importantly, the vaccine cannot cause cancer, because it encodes for only small, nonfunctional parts of the protein. When the genetic information is translated into those protein pieces in the body, they trigger the immune system to mount an attack against the tumor. Ideally, this immune response will cause the tumor to shrink and disappear.

Notably, the Moderna melanoma vaccine is tailor-made for each patient. Each tumor is unique, and so the vaccine needs to be unique as well. To customize vaccines, researchers first biopsy the patient’s tumor to determine what neoantigens are present. The vaccine manufacturer then designs specific mRNA molecules that encode those neoantigens. When this custom mRNA vaccine is administered, the body translates the genetic material into proteins specific to the patient’s tumor, resulting in an immune response against the tumor.

Combining Vaccination with Immunotherapy

Vaccines are a form of immunotherapy, because they treat diseases by harnessing the immune system. However, other immunotherapy cancer drugs are not vaccines because, while they also stimulate the immune system, they do not target specific neoantigens.

In fact, the Moderna vaccine is co-administered with the immunotherapy drug pembrolizumab, which is marketed as Keytruda. Why are two drugs needed?

Certain immune cells called T-cells have molecular accelerator and brake components that serve as checkpoints to ensure they are revved up only in the presence of a foreign invader such as a tumor. However, sometimes tumor cells find a way to keep the T-cell brakes on and suppress the immune response. In these cases, the Moderna vaccine correctly identifies the tumor, but T-cells cannot respond to it.

Pembrolizumab, however, can bind directly to a brake component on the T-cell, inactivating the brake system and allowing the immune cells to attack the tumor.

Not a Preventive Cancer Vaccine

So why can’t the Moderna vaccine be administered to healthy people to prevent melanoma before it arises?

Cancers are highly variable from person to person. Each melanoma harbors a different neoantigen profile that cannot be predicted in advance. Therefore, a vaccine cannot be developed in advance of the illness.

The experimental mRNA melanoma vaccine, currently still in early-phase clinical trials, is an example of the new frontier of personalized medicine. By understanding the molecular basis of diseases, researchers can explore how their underlying causes vary among people, and offer personalized therapeutic options against those diseases.

What MRI and fMRI Scans of Programmers’ Brains Reveal

Image Credit: Alex Shipps (Canva)

This is Your Brain – This is Your Brain on Code

Steve Nadis | MIT CSAIL

Functional magnetic resonance imaging (fMRI), which measures changes in blood flow throughout the brain, has been used over the past couple of decades for a variety of applications, including “functional anatomy” — a way of determining which brain areas are switched on when a person carries out a particular task. fMRI has been used to look at people’s brains while they’re doing all sorts of things — working out math problems, learning foreign languages, playing chess, improvising on the piano, doing crossword puzzles, and even watching TV shows like “Curb Your Enthusiasm.”

One pursuit that’s received little attention is computer programming — both the chore of writing code and the equally confounding task of trying to understand a piece of already-written code. “Given the importance that computer programs have assumed in our everyday lives,” says Shashank Srikant, a PhD student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), “that’s surely worth looking into. So many people are dealing with code these days — reading, writing, designing, debugging — but no one really knows what’s going on in their heads when that happens.” Fortunately, he has made some “headway” in that direction in a paper — written with MIT colleagues Benjamin Lipkin (the paper’s other lead author, along with Srikant), Anna Ivanova, Evelina Fedorenko, and Una-May O’Reilly — that was presented earlier this month at the Neural Information Processing Systems Conference held in New Orleans.

The new paper built on a 2020 study, written by many of the same authors, which used fMRI to monitor the brains of programmers as they “comprehended” small pieces, or snippets, of code. (Comprehension, in this case, means looking at a snippet and correctly determining the result of the computation performed by the snippet.) The 2020 work showed that code comprehension did not consistently activate the language system, brain regions that handle language processing, explains Fedorenko, a brain and cognitive sciences (BCS) professor and a coauthor of the earlier study. “Instead, the multiple demand network — a brain system that is linked to general reasoning and supports domains like mathematical and logical thinking — was strongly active.” The current work, which also utilizes MRI scans of programmers, takes “a deeper dive,” she says, seeking to obtain more fine-grained information.

Whereas the previous study looked at 20 to 30 people to determine which brain systems, on average, are relied upon to comprehend code, the new research looks at the brain activity of individual programmers as they process specific elements of a computer program. Suppose, for instance, that there’s a one-line piece of code that involves word manipulation and a separate piece of code that entails a mathematical operation. “Can I go from the activity we see in the brains, the actual brain signals, to try to reverse-engineer and figure out what, specifically, the programmer was looking at?” Srikant asks. “This would reveal what information pertaining to programs is uniquely encoded in our brains.” To neuroscientists, he notes, a physical property is considered “encoded” if they can infer that property by looking at someone’s brain signals.

Take, for instance, a loop — an instruction within a program to repeat a specific operation until the desired result is achieved — or a branch, a different type of programming instruction that can cause the computer to switch from one operation to another. Based on the patterns of brain activity that were observed, the group could tell whether someone was evaluating a piece of code involving a loop or a branch. The researchers could also tell whether the code related to words or mathematical symbols, and whether someone was reading actual code or merely a written description of that code.

That addressed a first question that an investigator might ask as to whether something is, in fact, encoded. If the answer is yes, the next question might be: where is it encoded? In the above-cited cases — loops or branches, words or math, code or a description thereof — brain activation levels were found to be comparable in both the language system and the multiple demand network.

A noticeable difference was observed, however, when it came to code properties related to what’s called dynamic analysis.

Programs can have “static” properties — such as the number of numerals in a sequence — that do not change over time. “But programs can also have a dynamic aspect, such as the number of times a loop runs,” Srikant says. “I can’t always read a piece of code and know, in advance, what the run time of that program will be.” The MIT researchers found that for dynamic analysis, information is encoded much better in the multiple demand network than it is in the language processing center. That finding was one clue in their quest to see how code comprehension is distributed throughout the brain — which parts are involved and which ones assume a bigger role in certain aspects of that task.
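To illustrate the distinction, here is a hypothetical snippet (not a stimulus from the MIT study): the number of numeric literals in the code is a static property that can be read off the page, while the number of times the loop runs is a dynamic property that depends on the input.

```python
# Hypothetical illustration of static vs. dynamic code properties (not taken from
# the MIT study). The count of numeric literals below is a static property: it can
# be read straight off the page. The number of loop iterations is a dynamic
# property: it depends on the input n and is only known once the code runs.
def collatz_steps(n):
    steps = 0
    while n != 1:                 # iteration count is not obvious from the source alone
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(6))   # 8 iterations
print(collatz_steps(27))  # 111 iterations, despite a similar-looking input
```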

The team carried out a second set of experiments, which incorporated machine learning models called neural networks that were specifically trained on computer programs. These models have been successful, in recent years, in helping programmers complete pieces of code. What the group wanted to find out was whether the brain signals seen in their study when participants were examining pieces of code resembled the patterns of activation observed when neural networks analyzed the same piece of code. And the answer they arrived at was a qualified yes.

“If you put a piece of code into the neural network, it produces a list of numbers that tells you, in some way, what the program is all about,” Srikant says. Brain scans of people studying computer programs similarly produce a list of numbers. When a program is dominated by branching, for example, “you see a distinct pattern of brain activity,” he adds, “and you see a similar pattern when the machine learning model tries to understand that same snippet.”
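One common way to make that kind of comparison precise is representational similarity analysis: checking whether snippets that look alike to the model also evoke alike brain patterns. The sketch below shows the shape of such an analysis with placeholder arrays; it is not the team's actual method or data.

    # Hypothetical representational-similarity sketch with placeholder data.
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)
    brain = rng.standard_normal((20, 500))   # 20 snippets x voxel responses
    model = rng.standard_normal((20, 128))   # 20 snippets x model embedding dims

    def pairwise_similarity(mat):
        # correlation between every pair of snippets within one representation
        c = np.corrcoef(mat)
        return c[np.triu_indices_from(c, k=1)]

    rho, _ = spearmanr(pairwise_similarity(brain), pairwise_similarity(model))
    print(rho)  # higher rank correlation = more similar representational structure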

Mariya Toneva of the Max Planck Institute for Software Systems considers findings like this “particularly exciting. They raise the possibility of using computational models of code to better understand what happens in our brains as we read programs,” she says.

The MIT scientists are definitely intrigued by the connections they’ve uncovered, which shed light on how discrete pieces of computer programs are encoded in the brain. But they don’t yet know what these recently gleaned insights can tell us about how people carry out more elaborate plans in the real world. Completing tasks of this sort — such as going to the movies, which requires checking showtimes, arranging for transportation, purchasing tickets, and so forth — could not be handled by a single unit of code and just a single algorithm. Successful execution of such a plan would instead require “composition” — stringing together various snippets and algorithms into a sensible sequence that leads to something new, just like assembling individual bars of music in order to make a song or even a symphony. Creating models of code composition, says O’Reilly, a principal research scientist at CSAIL, “is beyond our grasp at the moment.”

Lipkin, a BCS PhD student, considers this the next logical step — figuring out how to “combine simple operations to build complex programs and use those strategies to effectively address general reasoning tasks.” He further believes that some of the progress toward that goal achieved by the team so far owes to its interdisciplinary makeup. “We were able to draw from individual experiences with program analysis and neural signal processing, as well as combined work on machine learning and natural language processing,” Lipkin says. “These types of collaborations are becoming increasingly common as neuro- and computer scientists join forces on the quest towards understanding and building general intelligence.”

Reprinted with permission from MIT News (http://news.mit.edu/)

A Dirty Challenge for Autonomous Vehicle Designers

Image Credit: Christine Daniloff (MIT)

Computers that Power Self-Driving Cars Could be a Huge Driver of Global Carbon Emissions

Adam Zewe | MIT News Office

In the future, the energy needed to run the powerful computers on board a global fleet of autonomous vehicles could generate as many greenhouse gas emissions as all the data centers in the world today.

That is one key finding of a new study from MIT researchers that explored the potential energy consumption and related carbon emissions if autonomous vehicles are widely adopted.

The data centers that house the physical computing infrastructure used for running applications are widely known for their large carbon footprint: They currently account for about 0.3 percent of global greenhouse gas emissions, or about as much carbon as the country of Argentina produces annually, according to the International Energy Agency. Realizing that less attention has been paid to the potential footprint of autonomous vehicles, MIT researchers built a statistical model to study the problem. They determined that 1 billion autonomous vehicles, each driving for one hour per day with a computer consuming 840 watts, would consume enough energy to generate about the same amount of emissions as data centers currently do.

The researchers also found that in over 90 percent of modeled scenarios, to keep autonomous vehicle emissions from zooming past current data center emissions, each vehicle must use less than 1.2 kilowatts of power for computing, which would require more efficient hardware. In one scenario — where 95 percent of the global fleet of vehicles is autonomous in 2050, computational workloads double every three years, and the world continues to decarbonize at the current rate — they found that hardware efficiency would need to double faster than every 1.1 years to keep emissions under those levels.

“If we just keep the business-as-usual trends in decarbonization and the current rate of hardware efficiency improvements, it doesn’t seem like it is going to be enough to constrain the emissions from computing onboard autonomous vehicles. This has the potential to become an enormous problem. But if we get ahead of it, we could design more efficient autonomous vehicles that have a smaller carbon footprint from the start,” says first author Soumya Sudhakar, a graduate student in aeronautics and astronautics.

Sudhakar wrote the paper with her co-advisors Vivienne Sze, associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Research Laboratory of Electronics (RLE); and Sertac Karaman, associate professor of aeronautics and astronautics and director of the Laboratory for Information and Decision Systems (LIDS). The research appears today in the January-February issue of IEEE Micro.

Modeling Emissions

The researchers built a framework to explore the operational emissions from computers on board a global fleet of electric vehicles that are fully autonomous, meaning they don’t require a backup human driver.

The model is a function of the number of vehicles in the global fleet, the power of each computer on each vehicle, the hours driven by each vehicle, and the carbon intensity of the electricity powering each computer.

“On its own, that looks like a deceptively simple equation. But each of those variables contains a lot of uncertainty because we are considering an emerging application that is not here yet,” Sudhakar says.
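A back-of-the-envelope version of that equation, using the fleet size, driving time, and onboard computer power quoted above, looks like this. The grid carbon intensity is an illustrative placeholder, not a figure from the paper.

    # Back-of-the-envelope emissions estimate for onboard computing.
    vehicles          = 1_000_000_000   # autonomous vehicles in the global fleet
    hours_per_day     = 1.0             # driving hours per vehicle per day
    computer_power_kw = 0.840           # onboard computing power, in kilowatts
    carbon_kg_per_kwh = 0.5             # assumed grid carbon intensity (placeholder)

    energy_kwh_per_year   = vehicles * hours_per_day * computer_power_kw * 365
    emissions_mt_per_year = energy_kwh_per_year * carbon_kg_per_kwh / 1e9  # megatonnes

    print(f"{energy_kwh_per_year / 1e9:.0f} TWh/yr, roughly {emissions_mt_per_year:.0f} Mt CO2/yr")

With those inputs, the fleet's computers would draw on the order of 300 terawatt-hours a year, and under the placeholder carbon intensity the emissions land in the low hundreds of megatonnes of CO2, the same order of magnitude the article attributes to today's data centers. Each of those inputs, though, is highly uncertain.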

For instance, some research suggests that the amount of time driven in autonomous vehicles might increase because people can multitask while driving, and the young and the elderly could drive more. But other research suggests that time spent driving might decrease because algorithms could find optimal routes that get people to their destinations faster.

In addition to considering these uncertainties, the researchers also needed to model advanced computing hardware and software that didn’t exist yet.

To accomplish that, they modeled the workload of a popular algorithm for autonomous vehicles, known as a multitask deep neural network, because it can perform many tasks at once. They explored how much energy this deep neural network would consume if it were processing many high-resolution inputs from many cameras with high frame rates simultaneously.

When they used the probabilistic model to explore different scenarios, Sudhakar was surprised by how quickly the algorithms’ workload added up.

For example, if an autonomous vehicle has 10 deep neural networks processing images from 10 cameras, and that vehicle drives for one hour a day, it will make 21.6 million inferences each day. One billion vehicles would make 21.6 quadrillion inferences. To put that into perspective, all of Facebook’s data centers worldwide make a few trillion inferences each day (1 quadrillion is 1,000 trillion).
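The arithmetic behind that figure is easy to reproduce; the sketch below does so, assuming roughly 60 frames processed per second per camera stream, a frame rate the article does not state explicitly.

    # Reproduces the inference counts quoted above under an assumed frame rate.
    dnns    = 10           # deep neural networks running on the vehicle
    cameras = 10           # camera streams feeding those networks
    fps     = 60           # assumed frames processed per second per stream
    seconds = 3600         # one hour of driving per day

    per_vehicle_per_day = dnns * cameras * fps * seconds      # 21,600,000
    fleet_per_day       = per_vehicle_per_day * 1_000_000_000
    print(per_vehicle_per_day, fleet_per_day)                 # 21.6 million; 21.6 quadrillion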

“After seeing the results, this makes a lot of sense, but it is not something that is on a lot of people’s radar. These vehicles could actually be using a ton of computer power. They have a 360-degree view of the world, so while we have two eyes, they may have 20 eyes, looking all over the place and trying to understand all the things that are happening at the same time,” Karaman says.

Autonomous vehicles would be used for moving goods, as well as people, so there could be a massive amount of computing power distributed along global supply chains, he says. And their model only considers computing — it doesn’t take into account the energy consumed by vehicle sensors or the emissions generated during manufacturing.

Keeping Emissions in Check

To keep emissions from spiraling out of control, the researchers found that each autonomous vehicle needs to consume less than 1.2 kilowatts of power for computing. For that to be possible, computing hardware must become more efficient at a significantly faster pace, doubling in efficiency about every 1.1 years.
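A stripped-down view of that race between workload growth and hardware efficiency, ignoring fleet growth and grid decarbonization, which the full model does account for, might look like this:

    # Simplified sketch: computing power per vehicle if the workload doubles
    # every 3 years while hardware efficiency doubles every 1.1 years.
    # Starting power and doubling times come from the article; the rest of the
    # published model (fleet size, carbon intensity) is omitted here.
    def power_kw(years, start_kw=0.840, workload_doubling=3.0, efficiency_doubling=1.1):
        workload_factor   = 2 ** (years / workload_doubling)
        efficiency_factor = 2 ** (years / efficiency_doubling)
        return start_kw * workload_factor / efficiency_factor

    for year in (0, 10, 20, 27):   # 27 years is roughly the horizon to 2050
        print(year, round(power_kw(year), 4))

In this stripped-down version, efficiency gains at the 1.1-year pace more than offset the workload growth; the paper's full scenario analysis, which also varies fleet adoption and grid carbon intensity, is what yields the 1.1-year requirement.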

One way to boost that efficiency could be to use more specialized hardware, which is designed to run specific driving algorithms. Because researchers know the navigation and perception tasks required for autonomous driving, it could be easier to design specialized hardware for those tasks, Sudhakar says. But vehicles tend to have 10- or 20-year lifespans, so one challenge in developing specialized hardware would be to “future-proof” it so it can run new algorithms.

In the future, researchers could also make the algorithms more efficient, so they would need less computing power. However, this is also challenging because trading off some accuracy for more efficiency could hamper vehicle safety.

Now that they have demonstrated this framework, the researchers want to continue exploring hardware efficiency and algorithm improvements. In addition, they say their model can be enhanced by characterizing embodied carbon from autonomous vehicles — the carbon emissions generated when a car is manufactured — and emissions from a vehicle’s sensors.

While there are still many scenarios to explore, the researchers hope that this work sheds light on a potential problem people may not have considered.

“We are hoping that people will think of emissions and carbon efficiency as important metrics to consider in their designs. The energy consumption of an autonomous vehicle is really critical, not just for extending the battery life, but also for sustainability,” says Sze.

Reprinted with permission from MIT News (http://news.mit.edu/)

Organs-On-A-Chip Minimize Late-Stage Drug Development Failures

Image Credit: Lung-on-a-Chip, National Center for Advancing Translational Sciences (Flickr)

Organ-On-A-Chip Models Allow Researchers to Conduct Studies Closer to Real-Life Conditions – and Possibly Grease the Drug Development Pipeline

Bringing a new drug to market costs billions of dollars and can take over a decade. These high monetary and time investments are both strong contributors to today’s skyrocketing health care costs and significant obstacles to delivering new therapies to patients. One big reason behind these barriers is the lab models researchers use to develop drugs in the first place.

Preclinical trials, or studies that test a drug’s efficacy and toxicity before it enters clinical trials in people, are mainly conducted on cell cultures and animals. Both are limited by their poor ability to mimic the conditions of the human body. Cell cultures in a petri dish are unable to replicate every aspect of tissue function, such as how cells interact in the body or the dynamics of living organs. And animals are not humans – even small genetic differences between species can be amplified to major physiological differences.

Fewer than 8% of cancer therapies that succeed in animal studies make it to human clinical trials. Because animal models so often fail to predict drug effects in people, these late-stage failures significantly drive up both costs and health risks for patients.

To address this translation problem, researchers have been developing a promising model that can more closely mimic the human body – organ-on-a-chip.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Chengpeng Chen, Assistant Professor of Chemistry and Biochemistry, University of Maryland, Baltimore County

As an analytical chemist, I have been working to develop organ and tissue models that avoid the simplicity of common cell cultures and the discrepancies of animal models. I believe that, with further development, organs-on-chips can help researchers study diseases and test drugs in conditions that are closer to real life.

What are Organs-On-Chips?

In the late 1990s, researchers figured out a way to layer elastic polymers to control and examine fluids at a microscopic level. This launched the field of microfluidics, which for the biomedical sciences involves the use of devices that can mimic the dynamic flow of fluids in the body, such as blood.

Advances in microfluidics have provided researchers a platform to culture cells that function more closely to how they would in the human body, specifically with organs-on-chips. The “chip” refers to the microfluidic device that encases the cells. They’re commonly made using the same technology as computer chips.

Organs-on-chips not only mimic blood flow in the body; these platforms also have microchambers that allow researchers to integrate multiple types of cells to mimic the diverse range of cell types normally present in an organ. The fluid flow connects these multiple cell types, allowing researchers to study how they interact with each other.

This technology can overcome the limitations of both static cell cultures and animal studies in several ways. First, the presence of fluid flowing in the model allows it to mimic both what a cell experiences in the body, such as how it receives nutrients and removes wastes, and how a drug will move in the blood and interact with multiple types of cells. The ability to control fluid flow also enables researchers to fine-tune the optimal dosing for a particular drug.

The lung-on-a-chip model, for instance, is able to integrate both the mechanical and physical qualities of a living human lung. It’s able to mimic the dilation and contraction, or inhalation and exhalation, of the lung and simulate the interface between the lung and air. The ability to replicate these qualities allows researchers to better study lung impairment across different factors.

Bringing Organs-On-Chips to Scale

While organ-on-a-chip pushes the boundaries of early-stage pharmaceutical research, the technology has not been widely integrated into drug development pipelines. I believe that a core obstacle to the wide adoption of such chips is their high complexity and low practicality.

Current organ-on-a-chip models are difficult for the average scientist to use. Also, because most models are single-use and allow only one input, which limits what researchers can study at a given time, they are both expensive and time- and labor-intensive to implement. The high investments required to use these models might dampen enthusiasm to adopt them. After all, researchers often use the least complex models available for preclinical studies to reduce time and cost.

Image Credit: Vanderbilt University (Flickr). This chip mimics the blood-brain barrier: the blue dye marks where brain cells would go, and the red dye marks the route of blood flow.

Lowering the technical bar to make and use organs-on-chips is critical to allowing the entire research community to take full advantage of their benefits. But this does not necessarily require simplifying the models. My lab, for example, has designed various “plug-and-play” tissue chips that are standardized and modular, allowing researchers to readily assemble premade parts to run their experiments.

The advent of 3D printing has also significantly facilitated the development of organ-on-a-chip, allowing researchers to directly manufacture entire tissue and organ models on chips. 3D printing is ideal for fast prototyping and design-sharing between users and also makes it easy for mass production of standardized materials.

I believe that organs-on-chips hold the potential to enable breakthroughs in drug discovery and allow researchers to better understand how organs function in health and disease. Increasing this technology’s accessibility could help take the model out of development in the lab and let it make its mark on the biomedical industry.

The Pros, Cons, and Many Definitions of ‘Gig’ Work

Image Credit: Stock Catalog

What’s a ‘Gig’ Job? How it’s Legally Defined Affects Workers’ Rights and Protections

The “gig” economy has captured the attention of technology futurists, journalists, academics and policymakers.

“Future of work” discussions tend toward two extremes: breathless excitement at the brave new world that provides greater flexibility, mobility and entrepreneurial energy, or dire accounts of its immiserating impacts on the workers who labor beneath the gig economy’s yoke.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of David Weil, Visiting Senior Faculty Fellow, Ash Center for Democracy Harvard Kennedy School / Professor, Heller School for Social Policy and Management, Brandeis University.

These widely diverging views may be partly due to the many definitions of what constitutes “gig work” and the resulting difficulties in measuring its prevalence. As an academic who has studied workplace laws for decades and who ran the federal agency that enforces workplace protections during the Obama administration, I know that the way we define, measure and treat gig workers under the law has significant consequences for workers. That’s particularly true for those lacking leverage in the labor market.

While this emerging model of employment offers benefits for workers, there are pitfalls as well. Confusion over the meaning and size of the gig workforce – at times the intentional work of companies with a vested economic interest – can obscure the effects gig status can have on workers’ earnings, workplace conditions and opportunities.

Defining Gig Work

Many trace the phrase “gig economy” to a 2009 essay in which editor and author Tina Brown proclaimed: “No one I know has a job anymore. They’ve got Gigs.”

Although Brown focused on professional and semiprofessional workers chasing short-term work, the term soon applied to a variety of jobs in low-paid occupations and industries. Several years later, the rapid ascent of Uber, Lyft and DoorDash led the term gig to be associated with platform and digital business models. More recently, the pandemic linked gig work to a broader set of jobs associated with high turnover, limited career prospects, volatile wages and exposure to COVID-19 uncertainties.

The imprecision of gig, therefore, connotes different things: Some uses focus on the temporary or “contingent” nature of the work, such as jobs that may be terminated at any time, usually at the discretion of the employer. Other definitions focus on the unpredictability of work in terms of earnings, scheduling, hours provided in a workweek or location. Still other depictions focus on the business structure through which work is engaged – a staffing agency, digital platform, contractor or other intermediary. Further complicating the definition of gig is whether the focus is on a worker’s primary source of income or on side work meant to supplement income.

Measuring Gig Work

These differing definitions of gig work have led to widely varying estimates of its prevalence.

A conservative estimate from the Bureau of Labor Statistics household-based survey of “alternative work arrangements” suggests that gig workers “in non-standard categories” account for about 10% of employment. Alternatively, other researchers estimate the prevalence to be about three times higher, at 32.5%, using a Federal Reserve survey that broadly defines gig work to include any work that is temporary and variable in nature as either a primary or secondary source of earnings. And when the freelancing platform Upwork and consulting firm McKinsey & Co. use a broader concept of “independent work,” they report rates as high as 36% of employed respondents.

No consensus definition or measurement approach has emerged, despite many attempts, including a 2020 panel report by the National Academies of Sciences, Engineering, and Medicine. Various estimates do suggest several common themes, however: Gig work is sizable, present in both traditional and digital workplaces, and draws upon workers across the age, education, demographic and skill spectrum.

Why it Matters

As the above indicates, gig workers can range from high-paid professionals working on a project-to-project basis to low-wage workers whose earnings are highly variable, who work in nonprofessional or semiprofessional occupations and who accept – by choice or necessity – volatile hours and a short-term time commitment from the organization paying for that work.

Regardless of their professional status, many workers operating in gig arrangements are classified as independent contractors rather than employees. As independent contractors, workers lose rights to a minimum wage, overtime and a safe and healthy work environment as well as protections against discrimination and harassment. Independent contractors also lose access to unemployment insurance, workers’ compensation and paid sick leave now required in many states.

Federal and state laws differ in the factors they draw on to make that call. A key concept underlying that determination is how “economically dependent” the worker is on the employer or contracting party. Greater economic independence – for example, the ability to set the price of a service, decide how and where tasks are done, and expand or contract that work based on the individual’s own skills, abilities and enterprise – suggests a role as an independent contractor.

In contrast, if the hiring party basically calls the shots – for example, controlling what the individual does, how they do their work and when they do it, what they are permitted to do and not do, and what performance is deemed acceptable – this suggests employee status. That’s because workplace laws are generally geared toward employees and seek to protect workers who have unequal bargaining leverage in the labor market, a concept based on long-standing Supreme Court precedent.

Making Work More Precarious

Over the past few decades, a growing number of low-wage workers find themselves in gig work situations – everything from platform drivers and delivery personnel to construction laborers, distribution workers, short-haul truck drivers and home health aides. Taken together, the grouping could easily exceed 20 million workers.

Many companies have incentives to classify these workers as independent contractors in order to reduce costs and risks, not because of a truly transformed nature of work where those so classified are real entrepreneurs or self-standing businesses.

Since gig work tends to be volatile and contingent, losing employment protections amplifies the precariousness of work. A business using misclassified workers can gain cost advantages over competitors who treat their workers as employees as required by the law. This competitive dynamic can spread misclassification to new companies, industries and occupations – a problem we addressed directly, for example, in construction cases when I led the Wage and Hour Division and more recently in several health care cases.

The future of work is not governed by immutable technological forces but by volitional private and public choices. Navigating to that future requires weighing the benefits gig work can provide some workers, such as greater economic independence, against the continuing need to protect and bestow rights on the many workers who will continue to play on a very uneven playing field in the labor market.