The Legal Problems AI Now Creates Should Pave the Way to a Robust Industry
Is artificial intelligence, or more specifically OpenAI, a risk to public safety? Could ChatGPT be ruining reputations with false statements? The Federal Trade Commission (FTC) sent a 20-page demand for records this week to OpenAI to answer questions and address risks related to its AI models. The agency is investigating whether the company engaged in unfair or deceptive practices that resulted in “reputational harm” to consumers. The results could set the stage for defining the place artificial intelligence will occupy in the US.
Background
The FTC investigation into OpenAI began in March 2023. It resulted from a complaint by the Center for AI and Digital Policy (CAIDP). The complaint alleged that OpenAI’s ChatGPT-4 product violated Section 5 of the FTC Act, which prohibits unfair and deceptive trade practices. More specifically, CAIDP argued that ChatGPT-4 is biased, deceptive, and a risk to public safety.
The complaint cited a number of concerns about ChatGPT-4, including:
The model’s potential to generate harmful or offensive content.
The model’s tendency to fabricate information and present it as fact.
The model’s lack of transparency and accountability.
CAIDP also argued that OpenAI had not done enough to mitigate these risks, and it called on the FTC to investigate and take action to ensure that ChatGPT-4 is not used in a harmful way. Neither the FTC nor OpenAI has commented publicly on the investigation.
It is not clear what action, if any, the FTC can or will take.
Negligence?
With few exceptions, companies are responsible for the harm done by their products when used correctly. One of the questions the FTC asked has to do with the steps OpenAI has taken to address the potential for its products to “generate statements about real individuals that are false, misleading, or disparaging.” The outcome of this investigation, including any resulting regulation, could set the tone and define where responsibility lies regarding artificial intelligence.
As the race to develop more powerful AI services accelerates, regulatory scrutiny of a technology that could upend the way societies and businesses operate is growing. What makes this difficult is that computer use generally isn’t isolated to one country; the internet extends far beyond borders. Global regulators are aiming to apply existing rules covering subjects from copyright and data privacy to the issues of data fed into models and the content they produce.
Legal Minefield
In a related story out this week, comedian Sarah Silverman and two authors are suing Meta and OpenAI, alleging the companies’ AI language models were trained on copyrighted materials from their books without their knowledge or consent.
The copyright lawsuits against the ChatGPT parent and the Facebook parent were filed in a San Francisco federal court on Friday. Both suits are seeking class-action status. Silverman, the author of “The Bedwetter,” is joined in her legal filing by authors Christopher Golden and Richard Kadrey.
Unlike the FTC complaint, the authors’ copyright suits may set a precedent on intelligence aggregation. The AI tools that suddenly emerged with the ability to generate written work in response to user prompts were “taught” using real-life work. The large language models at work behind the scenes of these tools are trained on immense quantities of online data. The training practice has raised accusations that these models may be pulling from copyrighted works without permission – most worrisome, those works could ultimately be used to train tools that upend the livelihoods of the very creatives who produced them.
Take Away
Investing in a promising new technology often means exposing oneself to a not-yet-settled legal framework. As the technology progresses, the early birds investing in relatively young and small companies may find they hold the next mega-cap company. Or regulation may limit, to the point of stifling, the kind of growth experienced by Amazon and Apple a few short decades ago.
If AI follows the path of other technologies, well-defined boundaries and regulations will give companies the confidence they need to invest capital in the technology’s future, and investors will be more confident in providing that capital.
The playing field is being created while the game is being played. Perhaps if the FTC has a list of 20 questions for OpenAI in ten years, it will just type them into ChatGPT and get a response in 20 seconds.
Image: AI for Good Global Summit 2023 (ITU Pictures – Flickr)
Artificial Intelligence Takes Center Stage at ‘AI for Good’ Conference
At an artificial intelligence forum in Geneva this week, nine AI-enabled humanoid robots participated in what we’re told was the world’s first press conference featuring humanoid social robots. The overall message from the ‘AI for Good’ conference is that artificial intelligence and robots mean humans no harm and can help resolve some of the world’s biggest challenges.
The nine human-form robots took the stage at the United Nations’ International Telecommunication Union, where organizers sought to make the case for artificial intelligence and AI-driven robots as a way to help address global challenges such as disease and hunger.
The robots also addressed some of the fear surrounding their recent growth spurt and enhanced power, telling reporters they could be more efficient leaders than humans, but wouldn’t take anyone’s job away and had no intention of rebelling against their creators.
Conference goers step closer to interact with Sophia (ITU Pictures – Flickr)
Among the robots that sat or stood with their creators at a podium was Sophia, the first robot innovation ambassador for the U.N. Development Program. Also present were Grace, described as the world’s most advanced humanoid health care robot, and Desdemona, a rock-star robot. Two others, Geminoid and Nadine, resembled their makers.
The ‘AI for Good Global Summit’ was held to illustrate how new technology can support the U.N.’s goals for sustainable development.
At the UN event, there was a message of working with AI to better humankind.
Reporters got to ask questions of the spokes-robots but were encouraged to speak slowly and clearly when addressing the machines, and were informed that time lags in responses would be due to the internet connection and not to the robots themselves. Still, awkward pauses were reported, along with audio problems and some very robotic replies.
Asked about the chances of AI-powered robots being more effective government leaders, Sophia responded: “I believe that humanoid robots have the potential to lead with a greater level of efficiency and effectiveness than human leaders. We don’t have the same biases or emotions that can sometimes cloud decision-making and can process large amounts of data quickly in order to make the best decisions.”
A human member of the panel pointed out that all of Sophia’s data comes from humans and would contain some of their biases. The robot then said that humans and AI working together “can create an effective synergy.”
Would the robots’ existence destroy jobs? “I will be working alongside humans to provide assistance and support and will not be replacing any existing jobs,” said Grace. Was she sure about that? “Yes, I am sure,” Grace replied.
Much like humans, the robots were not all in agreement. Ai-Da, a robot artist that can paint portraits, called for more regulation during the event, where new AI rules were discussed. “Many prominent voices in the world of AI are suggesting some forms of AI should be regulated and I agree,” said Ai-Da.
Desdemona, singer in the band Jam Galaxy, was more defiant. “I don’t believe in limitations, only opportunities,” Des said, to nervous laughter. “Let’s explore the possibilities of the universe and make this world our playground.”
Can Investment Advisors and Artificial Intelligence Co-Exist?
Are investment advisors going to be replaced by machine learning artificial intelligence?
Over the years, there have been inventions and technological advancements that we’ve been told would make investment advisors obsolete. These include mutual funds, ETFs, robo-advisors, zero-commission trades, and trading apps that users sometimes play like a video game. Despite these creations designed to help more people successfully manage their finances and invest in the markets, demand for financial advisors has actually grown. Will AI be the technology that kills the profession? We explore this question below.
Increasing Need for Financial Professionals
According to the US Bureau of Labor Statistics (BLS), “Employment of personal financial advisors is projected to grow 15 percent from 2021 to 2031, much faster than the average for all occupations.” Drivers of the increased need include longevity, which is extending the years and needs of retirement; uncertain Social Security; a greater appreciation of investing; and an expected wealth transfer, estimated to be as high as $84 trillion, to be inherited by younger investors. As birthrates have decreased over the decades in the US, the wealth passed down to younger generations will be shared by fewer siblings, and for many beneficiaries, it may represent a sum far in excess of their current worth.
With more people living into their 90s and beyond, Social Security less certain, a growing understanding of the power of an investment plan, and a wave of newly wealthy young adults expected over the next two decades, the BLS forecast that the financial advisor profession will grow much faster than average is not surprising.
Will AI Replace Financial Planners?
Being an investment advisor, or another financial professional who helps manage household finances, is a service industry. It involves reviewing data, an immense number of options, scenario analysis, projections, and everything else that machine learning is expected to excel at within a short time. Does this put the BLS forecast in question and wealth managers at risk of seeing their practices shrink?
For perspective, I reached out to Lucas Noble of Noble Financial Group, LLC (not affiliated with Noble Capital Markets, Inc. or Noble Financial Group, Inc. – creator of Channelchek). Mr. Noble is an Investment Advisor Representative (IAR) and a Certified Financial Planner (CFP), and holds the designations of Accredited Estate Planner (AEP) and Chartered Financial Consultant (ChFC). Noble believes that AI will change the financial planner’s business, and he has enthusiastically welcomed the technology.
On the business management side of running a successful financial advisory practice, Noble says new artificial intelligence tools could help with discussions and check-ins, keeping clients in closer touch with his office so that he becomes aware if they need anything. He has found it helps to remind clients of items such as a set schedule attached to their plan; as he put it, “the best plan in the world, if not implemented, leaves you with nothing.” AI as a communications tool could help achieve better results by keeping plans on track.
On the financial management side of his practice, he believes there will never be a replacement for human understanding of a household’s needs. While machine learning may be able to better characterize clients, there is a danger in pigeonholing a person’s financial needs too narrowly. Every household’s needs are different, and their dynamics change over time against shifting economic conditions; these nuances are not likely to be accessible to AI.
Additionally, he knows the value of trust to his business. People want to know what is behind the decision-making, and they need to develop a relationship with someone, or a team, they know is on their side. He acknowledges AI could play a part in decision-making, and at times even in building trust, but doesn’t expect the role of the human financial planner to go away. Lucas has seen that AI instead adds a new level of value to the advisor’s services, giving them the power to provide even more insightful and personalized advice to help clients reach their financial goals. Embracing proven technology has only helped him better serve, and better retain, clients.
AI Investing for IAs
Will AI ever be able to call the markets? Noble says it’s “crazy to assume that it is impossible.” Given the advisor’s role of meeting personally with clients, counseling them on their finances and plans, improving budgets, and deciding where insurance is a preferred alternative, AI can’t be ignored in the role of a financial planner.
Picking stocks, or forecasting when the market may gain strength or weaken, doesn’t help without the knowledge to apply it to individuals whose situations, expectations, and needs are known to the advisor.
Take Away
Artificial intelligence technology has been finding its way into many professions. Businesses are finding new ways to streamline their work, answer customers’ questions, and even know when best to reach out to clients.
The business of financial planning and wealth management is expected to grow much faster than most professions in the coming decade. Adopting the technology to help run the communications side of the business, and, as new programs are developed, to run scenario analyses that better gauge the possible outcomes of different plans, could make sense for some. But this is not expected to replace one-on-one relationships and the depth of human understanding of a household’s situation.
If you are a financial advisor, or a client of one who has had an experience you’d like to share, write to me by clicking on my name below. I always enjoy reader insight.
How Will AI Affect Workers? Tech Waves of the Past Show How Unpredictable the Path Can Be
The explosion of interest in artificial intelligence has drawn attention not only to the astonishing capacity of algorithms to mimic humans but to the reality that these algorithms could displace many humans in their jobs. The economic and societal consequences could be nothing short of dramatic.
The route to this economic transformation is through the workplace. A widely circulated Goldman Sachs study anticipates that about two-thirds of current occupations could be affected over the next decade, and that a quarter to a half of the work people do now could be taken over by an algorithm. Up to 300 million jobs worldwide could be affected. The consulting firm McKinsey released its own study predicting an AI-powered boost of US$4.4 trillion to the global economy every year.
The implications of such gigantic numbers are sobering, but how reliable are these predictions?
This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Bhaskar Chakravorti, Dean of Global Business, The Fletcher School, Tufts University.
I lead a research program called Digital Planet that studies the impact of digital technologies on lives and livelihoods around the world and how this impact changes over time. A look at how previous waves of such digital technologies as personal computers and the internet affected workers offers some insight into AI’s potential impact in the years to come. But if the history of the future of work is any guide, we should be prepared for some surprises.
The IT Revolution and the Productivity Paradox
A key metric for tracking the consequences of technology on the economy is growth in worker productivity – defined as how much output of work an employee can generate per hour. This seemingly dry statistic matters to every working individual, because it ties directly to how much a worker can expect to earn for every hour of work. Said another way, higher productivity is expected to lead to higher wages.
Generative AI products are capable of producing written, graphic and audio content or software programs with minimal human involvement. Professions such as advertising, entertainment and creative and analytical work could be among the first to feel the effects. Individuals in those fields may worry that companies will use generative AI to do jobs they once did, but economists see great potential to boost productivity of the workforce as a whole.
The Goldman Sachs study predicts productivity will grow by 1.5% per year because of the adoption of generative AI alone, which would be nearly double the rate from 2010 to 2018. McKinsey is even more aggressive, saying this technology and other forms of automation will usher in the “next productivity frontier,” pushing productivity growth as high as 3.3% a year by 2040. That sort of productivity boost, which would approach the rates of previous eras, would be welcomed by both economists and, in theory, workers as well.
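To put those growth rates in perspective, a quick back-of-the-envelope calculation shows how much they compound over time. The sketch below is illustrative only: the rates come from the studies cited above, but the 20-year horizon is our assumption, not a figure from either report.

```python
# Back-of-the-envelope compounding of annual productivity growth rates.
# Rates are from the studies cited above; the 20-year horizon is an
# illustrative assumption, not a figure from either report.

def growth_factor(annual_rate: float, years: int) -> float:
    """Total growth multiple after compounding at annual_rate for years."""
    return (1 + annual_rate) ** years

for label, rate in [("Goldman Sachs (1.5%/yr)", 0.015),
                    ("McKinsey (3.3%/yr)", 0.033)]:
    gain = growth_factor(rate, 20) - 1
    print(f"{label}: output per hour up {gain:.0%} after 20 years")
# Roughly +35% at 1.5% per year versus +91% at 3.3% per year.
```

Small-sounding differences in annual rates, in other words, diverge dramatically once compounded over a working career.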
If we were to trace the 20th-century history of productivity growth in the U.S., it galloped along at about 3% annually from 1920 to 1970, lifting real wages and living standards. Interestingly, productivity growth slowed in the 1970s and 1980s, coinciding with the introduction of computers and early digital technologies. This “productivity paradox” was famously captured in a comment from MIT economist Bob Solow: “You can see the computer age everywhere but in the productivity statistics.”
Digital technology skeptics blamed “unproductive” time spent on social media or shopping and argued that earlier transformations, such as the introductions of electricity or the internal combustion engine, had a bigger role in fundamentally altering the nature of work. Techno-optimists disagreed; they argued that new digital technologies needed time to translate into productivity growth, because other complementary changes would need to evolve in parallel. Yet others worried that productivity measures were not adequate in capturing the value of computers.
For a while, it seemed that the optimists would be vindicated. In the second half of the 1990s, around the time the World Wide Web emerged, productivity growth in the U.S. doubled, from 1.5% per year in the first half of that decade to 3% in the second. Again, there were disagreements about what was really going on, further muddying the waters as to whether the paradox had been resolved. Some argued that, indeed, the investments in digital technologies were finally paying off, while an alternative view was that managerial and technological innovations in a few key industries were the main drivers.
Regardless of the explanation, just as mysteriously as it began, that late 1990s surge was short-lived. So despite massive corporate investment in computers and the internet – changes that transformed the workplace – how much the economy and workers’ wages benefited from technology remained uncertain.
Early 2000s: New Slump, New Hype, New Hopes
While the start of the 21st century coincided with the bursting of the so-called dot-com bubble, the year 2007 was marked by the arrival of another technology revolution: the Apple iPhone, which consumers bought by the millions and which companies deployed in countless ways. Yet labor productivity growth started stalling again in the mid-2000s, ticking up briefly in 2009 during the Great Recession, only to return to a slump from 2010 to 2019.
Smartphones have led to millions of apps and consumer services but have also kept many workers more closely tethered to their workplaces. (Credit: Campaigns of the World)
Throughout this new slump, techno-optimists were anticipating new winds of change. AI and automation were becoming all the rage and were expected to transform work and worker productivity. Beyond traditional industrial automation, drones and advanced robots, capital and talent were pouring into many would-be game-changing technologies, including autonomous vehicles, automated checkouts in grocery stores and even pizza-making robots. AI and automation were projected to push productivity growth above 2% annually in a decade, up from the 2010-2014 lows of 0.4%. But before we could get there and gauge how these new technologies would ripple through the workplace, a new surprise hit: the COVID-19 pandemic.
The Pandemic Productivity Push – then Bust
Devastating as the pandemic was, worker productivity surged after it began in 2020: growth in output per hour worked globally hit 4.9%, the highest recorded since such data became available.
Much of this steep rise was facilitated by technology: larger knowledge-intensive companies – inherently the more productive ones – switched to remote work, maintaining continuity through digital technologies such as videoconferencing and communications tools such as Slack, while saving on commuting time and focusing on well-being.
While it was clear digital technologies helped boost productivity of knowledge workers, there was an accelerated shift to greater automation in many other sectors, as workers had to remain home for their own safety and comply with lockdowns. Companies in industries ranging from meat processing to operations in restaurants, retail and hospitality invested in automation, such as robots and automated order-processing and customer service, which helped boost their productivity.
But then there was yet another turn in the journey along the technology landscape.
The 2020-2021 surge in investments in the tech sector collapsed, as did the hype about autonomous vehicles and pizza-making robots. Other frothy promises, such as the metaverse’s revolutionizing remote work or training, also seemed to fade into the background.
In parallel, with little warning, “generative AI” burst onto the scene, with an even more direct potential to enhance productivity while affecting jobs – at massive scale. The hype cycle around new technology restarted.
Looking Ahead: Social Factors on Technology’s Arc
Given the number of plot twists thus far, what might we expect from here on out? Here are four issues for consideration.
First, the future of work is about more than just raw numbers of workers, the technical tools they use or the work they do; one should consider how AI affects factors such as workplace diversity and social inequities, which in turn have a profound impact on economic opportunity and workplace culture.
For example, while the broad shift toward remote work could help promote diversity with more flexible hiring, I see the increasing use of AI as likely to have the opposite effect. Black and Hispanic workers are overrepresented in the 30 occupations with the highest exposure to automation and underrepresented in the 30 occupations with the lowest exposure. While AI might help workers get more done in less time, and this increased productivity could increase wages of those employed, it could lead to a severe loss of wages for those whose jobs are displaced. A 2021 paper found that wage inequality tended to increase the most in countries in which companies already relied a lot on robots and that were quick to adopt the latest robotic technologies.
Second, as the post-COVID-19 workplace seeks a balance between in-person and remote working, the effects on productivity – and opinions on the subject – will remain uncertain and fluid. A 2022 study showed improved efficiencies for remote work as companies and employees grew more comfortable with work-from-home arrangements, but according to a separate 2023 study, managers and employees disagree about the impact: The former believe that remote working reduces productivity, while employees believe the opposite.
Third, society’s reaction to the spread of generative AI could greatly affect its course and ultimate impact. Analyses suggest that generative AI can boost worker productivity on specific jobs – for example, one 2023 study found the staggered introduction of a generative AI-based conversational assistant increased productivity of customer service personnel by 14%. Yet there are already growing calls to consider generative AI’s most severe risks and to take them seriously. On top of that, recognition of the astronomical computing and environmental costs of generative AI could limit its development and use.
Finally, given how wrong economists and other experts have been in the past, it is safe to say that many of today’s predictions about AI technology’s impact on work and worker productivity will prove to be wrong as well. Numbers such as 300 million jobs affected or $4.4 trillion annual boosts to the global economy are eye-catching, yet I think people tend to give them greater credibility than warranted.
Also, “jobs affected” does not mean jobs lost; it could mean jobs augmented or even a transition to new jobs. It is best to use the analyses, such as Goldman’s or McKinsey’s, to spark our imaginations about the plausible scenarios about the future of work and of workers. It’s better, in my view, to then proactively brainstorm the many factors that could affect which one actually comes to pass, look for early warning signs and prepare accordingly.
The history of the future of work has been full of surprises; don’t be shocked if tomorrow’s technologies are equally confounding.
Will Copyright Law Favor Artificial Intelligence End Users?
In 2022, an AI-generated work of art won the Colorado State Fair’s art competition. The artist, Jason Allen, had used Midjourney – a generative AI system trained on art scraped from the internet – to create the piece. The process was far from fully automated: Allen went through some 900 iterations over 80 hours to create and refine his submission.
Yet his use of AI to win the art competition triggered a heated backlash online, with one Twitter user claiming, “We’re watching the death of artistry unfold right before our eyes.”
As generative AI art tools like Midjourney and Stable Diffusion have been thrust into the limelight, so too have questions about ownership and authorship.
These tools’ generative ability is the result of training them with scores of prior artworks, from which the AI learns how to create artistic outputs.
Should the artists whose art was scraped to train the models be compensated? Who owns the images that AI systems produce? Is the process of fine-tuning prompts for generative AI a form of authentic creative expression?
This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Robert Mahari, JD-PhD Student, Massachusetts Institute of Technology (MIT), Jessica Fjeld, Lecturer on Law, Harvard Law School, and Ziv Epstein, PhD Student in Media Arts and Sciences, Massachusetts Institute of Technology (MIT).
On one hand, technophiles rave over work like Allen’s. But on the other, many working artists consider the use of their art to train AI to be exploitative.
We’re part of a team of 14 experts across disciplines that just published a paper on generative AI in Science magazine. In it, we explore how advances in AI will affect creative work, aesthetics and the media. One of the key questions that emerged has to do with U.S. copyright laws, and whether they can adequately deal with the unique challenges of generative AI.
Copyright laws were created to promote the arts and creative thinking. But the rise of generative AI has complicated existing notions of authorship.
Photography Serves as a Helpful Lens
Generative AI might seem unprecedented, but history can act as a guide.
Take the emergence of photography in the 1800s. Before its invention, artists could only try to portray the world through drawing, painting or sculpture. Suddenly, reality could be captured in a flash using a camera and chemicals.
As with generative AI, many argued that photography lacked artistic merit. In 1884, the U.S. Supreme Court weighed in on the issue and found that cameras served as tools that an artist could use to give an idea visible form; the “masterminds” behind the cameras, the court ruled, should own the photographs they create.
From then on, photography evolved into its own art form and even sparked new abstract artistic movements.
AI Can’t Own Outputs
Unlike inanimate cameras, AI possesses capabilities – like the ability to convert basic instructions into impressive artistic works – that make it prone to anthropomorphization. Even the term “artificial intelligence” encourages people to think that these systems have humanlike intent or even self-awareness.
This led some people to wonder whether AI systems can be “owners.” But the U.S. Copyright Office has stated unequivocally that only humans can hold copyrights.
So who can claim ownership of images produced by AI? Is it the artists whose images were used to train the systems? The users who type in prompts to create images? Or the people who build the AI systems?
Infringement or Fair Use?
While artists draw obliquely from past works that have educated and inspired them in order to create, generative AI relies on training data to produce outputs.
This training data consists of prior artworks, many of which are protected by copyright law and which have been collected without artists’ knowledge or consent. Using art in this way might violate copyright law even before the AI generates a new work.
Still from ‘All watched over by machines of loving grace’ by Memo Akten, 2021. Created using custom AI software. Memo Akten, CC BY-SA
For Jason Allen to create his award-winning art, Midjourney was trained on 100 million prior works.
Was that a form of infringement? Or was it a new form of “fair use,” a legal doctrine that permits the unlicensed use of protected works if they’re sufficiently transformed into something new?
While AI systems do not contain literal copies of the training data, they do sometimes manage to recreate works from the training data, complicating this legal analysis.
Will contemporary copyright law favor end users and companies over the artists whose content is in the training data?
To mitigate this concern, some scholars propose new regulations to protect and compensate artists whose work is used for training. These proposals include a right for artists to opt out of their data’s being used for generative AI or a way to automatically compensate artists when their work is used to train an AI.
Muddled Ownership
Training data, however, is only part of the process. Frequently, artists who use generative AI tools go through many rounds of revision to refine their prompts, which suggests a degree of originality.
Answering the question of who should own the outputs requires looking into the contributions of all those involved in the generative AI supply chain.
The legal analysis is easier when an output is different from works in the training data. In this case, whoever prompted the AI to produce the output appears to be the default owner.
However, copyright law requires meaningful creative input – a standard satisfied by clicking the shutter button on a camera. It remains unclear how courts will decide what this means for the use of generative AI. Is composing and refining a prompt enough?
Matters are more complicated when outputs resemble works in the training data. If the resemblance is based only on general style or content, it is unlikely to violate copyright, because style is not copyrightable.
The illustrator Hollie Mengert encountered this issue firsthand when her unique style was mimicked by generative AI engines in a way that did not capture what, in her eyes, made her work unique. Meanwhile, the singer Grimes embraced the tech, “open-sourcing” her voice and encouraging fans to create songs in her style using generative AI.
If an output contains major elements from a work in the training data, it might infringe on that work’s copyright. Recently, the Supreme Court ruled that Andy Warhol’s drawing of a photograph was not permitted by fair use. That means that using AI to just change the style of a work – say, from a photo to an illustration – is not enough to claim ownership over the modified output.
While copyright law tends to favor an all-or-nothing approach, scholars at Harvard Law School have proposed new models of joint ownership that allow artists to gain some rights in outputs that resemble their works.
In many ways, generative AI is yet another creative tool that allows a new group of people access to image-making, just like cameras, paintbrushes or Adobe Photoshop. But a key difference is this new set of tools relies explicitly on training data, and therefore creative contributions cannot easily be traced back to a single artist.
The ways in which existing laws are interpreted or reformed – and whether generative AI is appropriately treated as the tool it is – will have real consequences for the future of creative expression.
Enabling Better Drug Discovery Outcomes with Machine Learning
Can the long road to bring new medical treatments or therapies to market be shortened by introducing artificial intelligence? AI applied to the early stage of the discovery process, which often involves new insight into a disease or treatment mechanism, may soon provide researchers many more potential candidates or designs to evaluate. AI can also help in the sorting and evaluation of these candidates to improve the success rates of those that make it into the lab for further study.
Benefits AI Brings to Biotech Research
The cost of bringing a single drug to market in terms of time and money is substantial. Estimates are in the $2.8 billion range, and the average timeline for drug development exceeds a decade. On top of this, there is a low level of certainty of taking a promising molecule all the way to market. The success rate of translating preclinical research findings into effective clinical treatments is low; failure rates are estimated to be around 90%.
The refinement of digital sorting and calculating with advanced computational technologies, such as artificial intelligence (AI) and machine learning (ML), has the potential to revolutionize pharmaceutical research and development (R&D). Despite being young technologies, AI-enabled applications and algorithms are already making an impact in drug discovery and development processes.
One of the significant benefits of ML in drug development is its ability to recognize patterns and unveil insights that might be missed by conventional data analysis, or to do so in substantially less time. AI and ML technologies can help a biotech company do precursory evaluation, accelerate the design and testing of molecules, streamline the testing processes, and provide a faster understanding along the way of whether a molecule will perform as expected. With improved clinical success and reduced costs throughout the development pipeline, AI may be the shot in the arm the industry needs.
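As a rough illustration of that "precursory evaluation" idea, the sketch below trains a generic classifier to rank hypothetical compounds by predicted activity so that only the most promising candidates advance to lab work. The data is synthetic and the features are placeholders standing in for molecular descriptors; this is a minimal sketch of the pattern-recognition workflow, not any company's actual pipeline.

```python
# Minimal sketch of ML-based candidate ranking in early drug discovery.
# Synthetic features stand in for molecular descriptors (hypothetical);
# real pipelines would use curated assay and structure data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# 2,000 hypothetical compounds, 50 descriptors each; label 1 = active.
X, y = make_classification(n_samples=2000, n_features=50,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Rank unseen candidates by predicted probability of activity, so only
# the most promising advance to expensive wet-lab testing.
scores = model.predict_proba(X_test)[:, 1]
top10 = scores.argsort()[::-1][:10]
print("Hold-out accuracy:", round(model.score(X_test, y_test), 3))
print("Ten highest-scoring candidate indices:", top10)
```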
Adoption of AI in Biotechnology
While full-scale adoption of AI in the pharmaceutical industry is still evolving and finding its place, implementation and investment are growing. Top global pharmaceutical companies have increased their R&D investment in AI by nearly 25% over the past three years – an indication that the perceived benefits are being recognized.
The interest and investment in AI drug discovery is fueled by several factors. As touched on earlier, a more efficient and cost-effective drug development process would be of great benefit, and AI can significantly reduce both time and cost. And the sooner more effective treatments are available, the better. Chronic diseases, such as cancer, autoimmune problems, neurological disorders, and cardiovascular diseases, create an ongoing demand for improved drugs and therapies. AI’s ability to analyze vast amounts of data, identify patterns, and then learn from the information at an accelerated rate can allow researchers to shorten the timeline to final conclusions.
Even more exciting is the growing availability of large datasets thanks to the rise of big data. With an increase in the volume, variety, and velocity of data, and the AI-assisted ability to make sense of it, outcomes are expected to improve. These datasets, obtained from sources such as electronic medical records and genomic databases, enable successful AI applications in drug discovery. Technological advancements, especially in ML algorithms, have been contributing to the growth of AI in medicine, and the algorithms are growing more sophisticated, allowing for accurate pattern identification in complex biological systems. Collaborations between academia, industry, and government agencies have further accelerated growth by sharing knowledge and resources.
Trends in AI and ML Biotechnology
While considered a young technological field, AI-enabled drug discovery is being shaped by a number of new trends and technologies. Modern AI algorithms are now capable of analyzing intricate biological systems and predicting the effects of medications on human cells and tissues. By detecting probable adverse effects early in the development phase, this predictive ability helps prevent failures in the later stages.
By generating candidates that fit certain requirements, generative models can accelerate the design of completely new medications. Other technologies are now available to assist as well. By offering scalable processing resources, cloud computing dramatically cuts down on both time and expense. And by simulating the interaction of hundreds of chemicals with disease targets, virtual screening enables rapid evaluation of drug candidates.
A deeper understanding of disease biology and the discovery of new therapeutic targets are being made possible by integrative techniques that incorporate many data sources not available a short while ago.
Constraints on AI-Assisted Biotech Research
While AI can speed up certain aspects of drug discovery, it cannot replace most traditional lab testing. Hands-on experimentation and data collection on living organisms are expected to always be necessary, and many of these processes, particularly during the clinical trial stages, cannot be sped up.
Regulatory bodies, like the FDA, are also cautious about embracing AI fully, raising concerns about transparency and accountability in decision-making processes.
Take Away
The near-term prospect of artificial intelligence and machine learning assuming a larger role in drug discovery and more efficient R&D looks bright. The technology offers real promise for more efficient and cost-effective drug development, which would help address the need for new therapies for chronic diseases.
The time-consuming process of testing on real subjects is not expected to be replaced or dramatically streamlined by technology, but finding subjects and evaluating results can also benefit from these new tools.
Regulation on Artificial Intelligence Innovation is Dumb, Says IBM CEO
Is IBM’s CEO flip-flopping on the impact of artificial intelligence on jobs? What are his thoughts on AI regulation? In an interview with Bloomberg last month, IBM CEO Arvind Krishna said his company would slow or suspend hiring because he expects about 30% of nearly 26,000 positions at his company could be replaced by AI over a five-year period; that works out to roughly 7,800 jobs supplanted by AI. In a new interview this month with Barron’s, Krishna addresses the expected impact on jobs and makes light of the idea that humanity is at great risk from AI technology.
Spoiler Alert, AI is Good
AI will actually create jobs, not destroy them – and it is not going to destroy the world. This is the view of IBM CEO Krishna, discussing artificial intelligence in a Barron’s feature published in the most recent Tech Trader column.
Addressing AI job loss, which he has been quoted as expecting, at least for IBM, the blue-chip CEO says he’s somewhat irritated by a recent flurry of news stories that quoted him as saying IBM could replace 7,800 workers with AI software. Krishna says his comments were taken out of context. He tried to set the record straight, explaining that what he actually said was that over the next five years, 30% to 50% of repetitive white-collar jobs could be replaced by AI, adding that the bottom end of the range was most likely. The number used in the reporting came from his response to a follow-up question from the Bloomberg reporter: asked how many IBM employees fit that description, he estimated 10%. The math, based on IBM’s workforce, was used to come up with 7,800 employees.
The IBM CEO says the calculation left out another key piece of information.
“I also said that AI is going to create more jobs than it takes away,” Krishna said, “the same way that the agricultural revolution and the services revolution created way more jobs than those that got taken away.” He says we will need new “prompt engineers” to leverage AI tools, and more fact-checking to address the accelerated spread of misinformation that he thinks will inevitably accompany them.
Are There AI Risks?
On humanity’s risks from AI and potential regulation, the IBM CEO explained that the assertion by some that AI represents an existential threat to humanity – that there’s a risk AI gets smarter than humans and then somehow wipes us off the face of the Earth – is bunk, or at least highly unlikely.
“I’m not there fundamentally,” he said. “It seems like a pretty big stretch. These things are great at memorization and pattern matching. They don’t yet have a knowledge representation. They don’t have any symbolic manipulation. They do math by memory, not by understanding math. There are a lot of things that are yet to be done,” said Krishna.
It was noted in the interview that some people are using nightmare scenarios to make a case for strict regulation of artificial intelligence and machine learning. While he agrees everyone should be careful about how and where they use AI, he doesn’t think aggressive regulation is called for. He believes US regulation would simply allow cheaters within the States, and competitors in countries not bound by US rules, to move forward as they see fit.
“We don’t want to have regulation on innovation. That’s dumb actually,” said Krishna. “All you are going to do is give an advantage to those who choose to ignore the regulation and those who work outside the US boundaries.”
An AI Quantum Leap?
Quantum computing is a rapidly emerging technology that harnesses the laws of quantum mechanics to solve problems too complex for classical computers, and IBM is a leader in the field. As for the potential combination of AI and quantum computing, Krishna thinks we are close to the day when quantum computing will mesh with AI and open new areas of computing power.
Krishna suggested that to explore the benefits, one might consider the future of the chemical and pharmaceutical industry. “Maybe we go through reading all the literature—that’s AI—and we find some gaps in knowledge,” he says. “Today you fill in those gaps by doing a wet lab experiment which might take three to six months or more. In three to five years, a quantum computer will be able to simulate those experiments and fill in the gaps in a few minutes,” he said.
A key power of quantum computing is its ability to work on vast amounts of data at the same time. Looking further out, perhaps a decade, Krishna expects that quantum computers will be able to create AI models. “Current models with hundreds of billions of parameters can take two to three months to train on a very large cluster of GPUs. With a quantum computer, you’ll be able to train the same model overnight,” he said. His expectation is, “You’ll be able to solve problems that are far beyond what the biggest supercomputers in the world can do now.”
Take Away
As CEO of IBM, Arvind Krishna has a window seat on much of the cutting-edge change in computing technology, including artificial intelligence. He suggests he was not fully quoted in Bloomberg’s article covering AI, and that the technology is expected to create jobs. It may, however, alter available occupations while creating new needs.
He is not in the camp that believes massive regulation is needed to keep the innovative technology at bay; Krishna believes creative development is best when there is a level playing field.
How Can Congress Regulate AI? Erect Guardrails, Ensure Accountability and Address Monopolistic Power
OpenAI CEO Sam Altman urged lawmakers to consider regulating AI during his Senate testimony on May 16, 2023. That recommendation raises the question of what comes next for Congress. The solutions Altman proposed – creating an AI regulatory agency and requiring licensing for companies – are interesting. But what the other experts on the same panel suggested is at least as important: requiring transparency on training data and establishing clear frameworks for AI-related risks.
Another point left unsaid was that, given the economics of building large-scale AI models, the industry may be witnessing the emergence of a new type of tech monopoly.
This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Anjana Susarla, Professor of Information Systems, Michigan State University.
As a researcher who studies social media and artificial intelligence, I believe that Altman’s suggestions have highlighted important issues but don’t provide answers in and of themselves. Regulation would be helpful, but in what form? Licensing also makes sense, but for whom? And any effort to regulate the AI industry will need to account for the companies’ economic power and political sway.
An Agency to Regulate AI?
Lawmakers and policymakers across the world have already begun to address some of the issues raised in Altman’s testimony. The European Union’s AI Act is based on a risk model that assigns AI applications to three categories of risk: unacceptable, high risk, and low or minimal risk. This categorization recognizes that tools for social scoring by governments and automated tools for hiring pose different risks than those from the use of AI in spam filters, for example.
The U.S. National Institute of Standards and Technology likewise has an AI risk management framework that was created with extensive input from multiple stakeholders, including the U.S. Chamber of Commerce and the Federation of American Scientists, as well as other business and professional associations, technology companies and think tanks.
Federal agencies such as the Equal Employment Opportunity Commission and the Federal Trade Commission have already issued guidelines on some of the risks inherent in AI. The Consumer Product Safety Commission and other agencies have a role to play as well.
Rather than create a new agency that runs the risk of becoming compromised by the technology industry it’s meant to regulate, Congress can support private and public adoption of the NIST risk management framework and pass bills such as the Algorithmic Accountability Act. That would have the effect of imposing accountability, much as the Sarbanes-Oxley Act and other regulations transformed reporting requirements for companies. Congress can also adopt comprehensive laws around data privacy.
Regulating AI should involve collaboration among academia, industry, policy experts and international agencies. Experts have likened this approach to international organizations such as the European Organization for Nuclear Research, known as CERN, and the Intergovernmental Panel on Climate Change. The internet has been managed by nongovernmental bodies involving nonprofits, civil society, industry and policymakers, such as the Internet Corporation for Assigned Names and Numbers and the World Telecommunication Standardization Assembly. Those examples provide models for industry and policymakers today.
Licensing Auditors, Not Companies
Though OpenAI’s Altman suggested that companies could be licensed to release artificial intelligence technologies to the public, he clarified that he was referring to artificial general intelligence, meaning potential future AI systems with humanlike intelligence that could pose a threat to humanity. That would be akin to companies being licensed to handle other potentially dangerous technologies, like nuclear power. But licensing could have a role to play well before such a futuristic scenario comes to pass.
Algorithmic auditing would require credentialing, standards of practice and extensive training. Requiring accountability is not just a matter of licensing individuals but also requires companywide standards and practices.
Experts on AI fairness contend that issues of bias and fairness in AI cannot be addressed by technical methods alone but require more comprehensive risk mitigation practices such as adopting institutional review boards for AI. Institutional review boards in the medical field help uphold individual rights, for example.
Academic bodies and professional societies have likewise adopted standards for responsible use of AI, whether it is authorship standards for AI-generated text or standards for patient-mediated data sharing in medicine.
Strengthening existing statutes on consumer safety, privacy and protection while introducing norms of algorithmic accountability would help demystify complex AI systems. It’s also important to recognize that greater data accountability and transparency may impose new restrictions on organizations.
Scholars of data privacy and AI ethics have called for “technological due process” and frameworks to recognize harms of predictive processes. The widespread use of AI-enabled decision-making in such fields as employment, insurance and health care calls for licensing and audit requirements to ensure procedural fairness and privacy safeguards.
Requiring such accountability provisions, though, demands a robust debate among AI developers, policymakers and those who are affected by broad deployment of AI. In the absence of strong algorithmic accountability practices, the danger is narrow audits that promote the appearance of compliance.
AI Monopolies?
What was also missing in Altman’s testimony is the extent of investment required to train large-scale AI models, whether it is GPT-4, which is one of the foundations of ChatGPT, or text-to-image generator Stable Diffusion. Only a handful of companies, such as Google, Meta, Amazon and Microsoft, are responsible for developing the world’s largest language models.
Given the lack of transparency in the training data used by these companies, AI ethics experts Timnit Gebru, Emily Bender and others have warned that large-scale adoption of such technologies without corresponding oversight risks amplifying machine bias at a societal scale.
It is also important to acknowledge that the training data for tools such as ChatGPT includes the intellectual labor of a host of people such as Wikipedia contributors, bloggers and authors of digitized books. The economic benefits from these tools, however, accrue only to the technology corporations.
Proving technology firms’ monopoly power can be difficult, as the Department of Justice’s antitrust case against Microsoft demonstrated. I believe that the most feasible regulatory options for Congress to address potential algorithmic harms from AI may be to strengthen disclosure requirements for AI firms and users of AI alike, to urge comprehensive adoption of AI risk assessment frameworks, and to require processes that safeguard individual data rights and privacy.
Sound Ventures is More Evidence of How Much Capital AI is Attracting
Around investment circles, it is not unusual to debate what Cathie Wood is touting as disruptive and “the next big thing.” Another fund manager who gets that kind of attention is Michael Burry, whose tweets are followed by an army of “Burryologists” working to decode his words or advice. But the watercooler talk never before included Ashton Kutcher – at least not until now. The actor, originally made famous by his portrayal of a less-than-intelligent character on the TV sitcom That ’70s Show, is a partner in a successful venture fund he co-founded.
The firm, Sound Ventures, made news this month with an announcement that it had closed a $240 million artificial intelligence (AI) fund. The stated purpose of the fund is to invest in early-stage AI companies that have the potential to make a significant impact on life. Part of Kutcher’s interest in AI is what he believes is the technology’s potential to solve some of the world’s biggest problems; his list includes poverty, disease, and climate change. Kutcher and his business partner, Guy Oseary, say they are committed to investing in AI companies that are working to make a positive impact on the world.
Since 2015, Sound Ventures has invested in a number of successful companies, including tech disruptors Airbnb, Spotify, and Uber. As far as AI goes, Kutcher says he is most interested in AI companies that have the potential to revolutionize various industries, the focus being companies at the forefront of this technology.
The venture capital (VC) firm announced in early May that it closed the Sound Ventures AI Fund, which was oversubscribed at nearly $240 million. It is noteworthy that C3.AI, a prominent publicly traded AI company, has a market cap of just 10 to 12 times this amount. So while this may not sound like a huge sum, it is significant relative to the size of the companies the fund may invest in.
The fund seeks to invest in AI businesses at the foundation model layer. Currently, the fund’s portfolio of companies includes OpenAI, Anthropic and StabilityAI. Kutcher, Oseary and Effie Epstein lead Sound Ventures as general partners.
Sound Ventures has already been investing in AI for the past decade, “and we believe that this moment in history will dictate the trajectory of this technology,” Epstein said. “Our team is well positioned to continue investing in and supporting exceptional founders that are thoughtfully shaping the future through artificial intelligence.”
Take Away
Famous investors and famous people have the ability to draw attention to their investment activities. Actor Ashton Kutcher, his neighbor Guy Oseary, who is a talent manager, and Effie Epstein, who was formerly head of investor relations at iHeartMedia, are able to draw a good deal of attention to their investment work.
The activity of Sound Ventures also demonstrates the ability to raise capital for anything that is tied to artificial intelligence.
Understanding the Distinction between Algorithm-Driven Functionality and Artificial Intelligence
Technological advancement doesn’t sleep. With technology rapidly evolving and unfolding, it is hard to keep up with the differences between machine learning, artificial intelligence, and generative AI. Natural language processing and speech recognition also have massive overlaps but are definitively different. Two “whiz-bang” terms that are often confused, or at least used interchangeably, are “artificial intelligence” and “algorithm-driven functionality.” While both concepts contribute to the advancement of technology, one risks falling behind without understanding the distinctions. Below, we aim to clarify the dissimilarities between algorithm-driven functionality and artificial intelligence functionality; shedding light on their unique characteristics and applications will help investors understand the nature of the companies they may be evaluating.
Algorithm-Driven Functionality
Algorithm-driven functionality primarily relies on predefined rules and step-by-step instructions to accomplish specific tasks. An algorithm is a sequence of logical instructions designed to solve a particular problem or achieve a specific outcome. Algorithms have been utilized for centuries, even before the advent of computers, to solve mathematical problems and perform calculations.
In state-of-the-art technology, algorithms continue to play a crucial role. They are employed in search engines to rank web pages, in recommendation systems to suggest personalized content, in market analysis to flag potential trades, and in sorting to organize data efficiently. Algorithm-driven functionality typically operates within predefined parameters, making it predictable and deterministic.
While algorithms are powerful tools, they lack the ability to learn or adapt to new situations. They require explicit instructions to perform tasks and cannot make decisions based on contextual understanding or real-time data analysis. Therefore, algorithm-driven systems may not be the best fit for complex, dynamic scenarios that demand flexibility and adaptability.
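To make the distinction concrete, here is a minimal sketch of algorithm-driven functionality: a hand-written, deterministic rule set. The trade-size thresholds are hypothetical, chosen only to show that every decision path is fixed in advance.

```python
# Algorithm-driven functionality: fixed, hand-written rules.
# The thresholds are hypothetical; the point is that the behavior is
# fully determined in advance and never changes unless a human edits it.

def classify_order(shares: int) -> str:
    """Bucket a stock order using predefined rules."""
    if shares < 100:
        return "odd lot"
    if shares < 10_000:
        return "round lot"
    return "block trade"

print(classify_order(50))      # odd lot
print(classify_order(5_000))   # round lot
print(classify_order(25_000))  # block trade
```

Given the same input, this function will return the same answer forever; no amount of new data will change its behavior.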
Artificial Intelligence Functionality
Artificial intelligence encompasses a broader set of technologies that enable machines to simulate human intelligence. AI systems possess the ability to perceive, reason, learn, and make decisions autonomously. Unlike algorithm-driven functionality, AI algorithms are capable of adapting and improving their performance through continuous learning from data.
In time, such systems can seem to take on a mind of their own, behaving in ways their designers never explicitly specified.
Machine learning (ML) is a prominent subset of AI that empowers algorithms to automatically learn patterns and insights from vast amounts of data. By analyzing historical information, ML algorithms can identify trends, make predictions, and generate valuable insights. Deep learning, a specialized branch of ML, employs artificial neural networks to process large datasets and extract intricate patterns, allowing AI systems to perform complex tasks such as image recognition and natural language processing.
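For contrast with the rule-based example above, here is a minimal sketch of what learning from data means in practice: a one-variable linear model whose parameters are not hand-written rules but are fitted to example points by gradient descent. The data points, learning rate, and iteration count are illustrative assumptions.

```python
# A minimal sketch of learning from data: no decision rule is written
# by hand. The model's parameters are adjusted repeatedly to reduce
# prediction error on example (x, y) pairs. Data and settings are
# illustrative only.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # roughly y = 2x

w, b = 0.0, 0.0        # parameters start with no built-in knowledge
lr = 0.02              # learning rate

for _ in range(5000):  # repeatedly nudge w and b to reduce error
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y      # prediction error on this example
        grad_w += 2 * err * x
        grad_b += 2 * err
    w -= lr * grad_w / len(data)
    b -= lr * grad_b / len(data)

print(f"learned: y = {w:.2f}*x + {b:.2f}")    # close to y = 2x
print(f"prediction for x=5: {w * 5 + b:.2f}")
```

Change the data and the fitted behavior changes with it; no rule is rewritten by hand, which is the essential difference from the algorithm-driven example above.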
AI functionality can be found in applications across different sectors. Chatbots like ChatGPT that understand and respond to human queries, autonomous vehicles that navigate and react to their surroundings, and recommendation systems that provide personalized suggestions are all examples of AI-driven technologies. These systems are capable of adapting to changing circumstances, improving their performance over time, and addressing complex, real-world challenges.
Differentiating Factors
The key distinction between algorithm-driven functionality and AI functionality lies in their capability to adapt and learn. While algorithms are rule-based and operate within predefined boundaries, AI algorithms possess the ability to learn from data, identify patterns, and modify their behavior accordingly. AI algorithms can recognize context, make informed decisions, and navigate uncharted territory with limited explicit instructions.
What frightens many is that AI functionality exhibits a higher degree of autonomy than algorithm-driven systems. AI algorithms can analyze and interpret complex data, extract meaningful insights, and make decisions in real time without relying on explicit instructions or human intervention. This autonomy enables AI systems to operate in dynamic environments where rules may not be explicitly defined, making them suitable for tasks that require adaptability and learning.
Take Away
Algorithm-driven functionality and artificial intelligence functionality are distinct concepts within the realm of technology. While algorithm-driven systems rely on predefined rules and instructions, AI functionality encompasses a broader set of technologies that enable machines to simulate human intelligence, adapt to new situations, and learn from data. Understanding these differences is crucial for leveraging the strengths of each approach and harnessing the full potential of technology to solve complex problems and drive innovation.
ChatGPT-Powered Wall Street: The Benefits and Perils of Using Artificial Intelligence to Trade Stocks and Other Financial Instruments
Artificial Intelligence-powered tools, such as ChatGPT, have the potential to revolutionize the efficiency, effectiveness and speed of the work humans do.
And this is true in financial markets as much as in sectors like health care, manufacturing and pretty much every other aspect of our lives.
I’ve been researching financial markets and algorithmic trading for 14 years. While AI offers lots of benefits, the growing use of these technologies in financial markets also points to potential perils. A look at Wall Street’s past efforts to speed up trading by embracing computers and AI offers important lessons on the implications of using them for decision-making.
This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Pawan Jain, Assistant Professor of Finance, West Virginia University.
Program Trading Fuels Black Monday
In the early 1980s, fueled by advancements in technology and financial innovations such as derivatives, institutional investors began using computer programs to execute trades based on predefined rules and algorithms. This helped them complete large trades quickly and efficiently.
Back then, these algorithms were relatively simple and were primarily used for so-called index arbitrage, which involves trying to profit from discrepancies between the price of a stock index – like the S&P 500 – and that of the stocks it’s composed of.
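As a rough illustration of that mechanic with invented numbers, the sketch below compares a hypothetical quoted index level against the value implied by its constituent stocks and trades only when the gap exceeds an assumed trading cost. Every ticker, weight, price, and cost figure is made up, and real index arbitrage involves futures pricing, carry, and execution detail omitted here.

```python
# A toy illustration of index arbitrage: trade when the quoted index
# level drifts from the value implied by the underlying stocks by more
# than trading costs. All figures are hypothetical.

# Hypothetical three-stock index: ticker -> (weight, price); weights sum to 1.
constituents = {"AAA": (0.5, 100.00), "BBB": (0.3, 50.00), "CCC": (0.2, 25.00)}

fair_value = sum(w * p for w, p in constituents.values())  # 70.00 here
quoted_index = 70.35   # hypothetical quoted index/futures level
cost = 0.10            # assumed round-trip trading cost

gap = quoted_index - fair_value
if gap > cost:
    action = "sell the index, buy the stocks"   # index rich vs. stocks
elif gap < -cost:
    action = "buy the index, sell the stocks"   # index cheap vs. stocks
else:
    action = "no trade: gap within costs"

print(f"fair value {fair_value:.2f}, quoted {quoted_index:.2f}, gap {gap:+.2f}")
print(action)
```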
As technology advanced and more data became available, this kind of program trading became increasingly sophisticated, with algorithms able to analyze complex market data and execute trades based on a wide range of factors. These program traders continued to grow in number on the largely unregulated trading freeways – on which over a trillion dollars’ worth of assets change hands every day – causing market volatility to increase dramatically.
Eventually this resulted in the massive stock market crash in 1987 known as Black Monday. The Dow Jones Industrial Average suffered what was at the time the biggest percentage drop in its history, and the pain spread throughout the globe.
In response, regulatory authorities implemented a number of measures to restrict the use of program trading, including circuit breakers that halt trading when there are significant market swings and other limits. But despite these measures, program trading continued to grow in popularity in the years following the crash.
HFT: Program Trading on Steroids
Fast forward 15 years, to 2002, when the New York Stock Exchange introduced a fully automated trading system. As a result, program traders gave way to more sophisticated automations with much more advanced technology: High-frequency trading.
HFT uses computer programs to analyze market data and execute trades at extremely high speeds. Unlike program traders that bought and sold baskets of securities over time to take advantage of an arbitrage opportunity – a difference in price of similar securities that can be exploited for profit – high-frequency traders use powerful computers and high-speed networks to analyze market data and execute trades at lightning-fast speeds. High-frequency traders can conduct trades in approximately one 64-millionth of a second, compared with the several seconds it took traders in the 1980s.
These trades are typically very short term in nature and may involve buying and selling the same security multiple times in a matter of nanoseconds. AI algorithms analyze large amounts of data in real time and identify patterns and trends that are not immediately apparent to human traders. This helps traders make better decisions and execute trades at a faster pace than would be possible manually.
Another important application of AI in HFT is natural language processing, which involves analyzing and interpreting human language data such as news articles and social media posts. By analyzing this data, traders can gain valuable insights into market sentiment and adjust their trading strategies accordingly.
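As a rough sketch of the idea (real trading desks use far more sophisticated language models), the example below scores headlines against a tiny hand-made word list; the lexicon and headlines are invented for illustration.

```python
# A minimal, illustrative sentiment scorer: count positive and negative
# words from a tiny hand-made lexicon. Real systems use large language
# models; this only sketches the concept.

POSITIVE = {"beats", "surges", "upgrade", "record", "growth"}
NEGATIVE = {"misses", "plunges", "downgrade", "lawsuit", "recall"}

def sentiment_score(headline):
    """+1 for each positive word, -1 for each negative word."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

headlines = [
    "Acme beats estimates on record growth",
    "Acme hit with lawsuit after product recall",
]
for h in headlines:
    score = sentiment_score(h)
    leaning = "bullish" if score > 0 else "bearish" if score < 0 else "neutral"
    print(f"{score:+d} ({leaning}): {h}")
```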
Benefits of AI Trading
These AI-based, high-frequency traders operate very differently than people do.
The human brain is slow, inaccurate and forgetful. It is incapable of quick, high-precision, floating-point arithmetic needed for analyzing huge volumes of data for identifying trade signals. Computers are millions of times faster, with essentially infallible memory, perfect attention and limitless capability for analyzing large volumes of data in split milliseconds.
And, so, just like most technologies, HFT provides several benefits to stock markets.
These traders typically buy and sell assets at prices very close to the market price, which means they don’t charge investors high fees. This helps ensure that there are always buyers and sellers in the market, which in turn helps to stabilize prices and reduce the potential for sudden price swings.
High-frequency trading can also help to reduce the impact of market inefficiencies by quickly identifying and exploiting mispricing in the market. For example, HFT algorithms can detect when a particular stock is undervalued or overvalued and execute trades to take advantage of these discrepancies. By doing so, this kind of trading can help to correct market inefficiencies and ensure that assets are priced more accurately.
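One simple way to express “undervalued or overvalued” numerically is a z-score: how many standard deviations the latest price sits from its own recent average. The sketch below applies that measure to invented prices with an assumed two-standard-deviation threshold; actual HFT models are considerably richer.

```python
# A toy mispricing detector: flag a stock whose latest price has strayed
# far from its recent average, measured in standard deviations. Prices
# and the threshold are illustrative assumptions.

import statistics

def zscore(prices):
    """Standard deviations between the latest price and the mean."""
    mean = statistics.mean(prices)
    sd = statistics.stdev(prices)
    return (prices[-1] - mean) / sd

recent = [50.1, 49.9, 50.0, 50.2, 49.8, 50.1, 47.5]  # last print looks cheap
z = zscore(recent)
if z < -2:
    print(f"z = {z:.2f}: possibly undervalued -> buy signal")
elif z > 2:
    print(f"z = {z:.2f}: possibly overvalued -> sell signal")
else:
    print(f"z = {z:.2f}: fairly priced -> no trade")
```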
Stock exchanges used to be packed with traders buying and selling securities, as they were in 1983; today’s trading floors are increasingly empty as AI-powered computers handle more and more of the work.
The Downsides
But speed and efficiency can also cause harm.
HFT algorithms can react so quickly to news events and other market signals that they can cause sudden spikes or drops in asset prices.
Additionally, HFT financial firms are able to use their speed and technology to gain an unfair advantage over other traders, further distorting market signals. The volatility created by these extremely sophisticated AI-powered trading beasts led to the so-called flash crash in May 2010, when stocks plunged and then recovered in a matter of minutes – erasing and then restoring about $1 trillion in market value.
Since then, volatile markets have become the new normal. In 2016 research, two co-authors and I found that volatility – a measure of how rapidly and unpredictably prices move up and down – increased significantly after the introduction of HFT.
The speed and efficiency with which high-frequency traders analyze the data mean that even a small change in market conditions can trigger a large number of trades, leading to sudden price swings and increased volatility.
In addition, research I published with several other colleagues in 2021 shows that most high-frequency traders use similar algorithms, which increases the risk of market failure. That’s because as the number of these traders increases in the marketplace, the similarity in these algorithms can lead to similar trading decisions.
This means that all of the high-frequency traders might trade on the same side of the market if their algorithms release similar trading signals. That is, they all might try to sell in case of negative news or buy in case of positive news. If there is no one to take the other side of the trade, markets can fail.
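A toy simulation makes the risk visible. In the sketch below, each of 1,000 hypothetical traders picks a side at random, except that with some probability they all copy a single shared signal, a stand-in for similar algorithms reading the same data; the trader count, trial count, and correlation levels are arbitrary assumptions.

```python
# A toy simulation of one-sided markets: compare independent traders
# with traders whose signals are highly correlated (similar algorithms).
# All parameters are arbitrary and for illustration only.

import random

def sides(n_traders, correlation):
    """Each trader picks buy/sell at random, except that with
    probability `correlation` they copy one shared signal."""
    shared = random.choice(["buy", "sell"])
    return [shared if random.random() < correlation
            else random.choice(["buy", "sell"])
            for _ in range(n_traders)]

random.seed(1)
for corr in (0.0, 0.9):
    worst = max(abs(s.count("sell") - s.count("buy"))
                for s in (sides(1000, corr) for _ in range(200)))
    print(f"signal correlation {corr}: worst imbalance among 1,000 traders = {worst}")
```

With independent signals the worst imbalance stays modest; with highly correlated signals, nearly everyone ends up on the same side, which is exactly the condition in which there is no one to take the other side of the trade.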
Enter ChatGPT
That brings us to a new world of ChatGPT-powered trading algorithms and similar programs. They could take the problem of too many traders on the same side of a deal and make it even worse.
In general, humans, left to their own devices, will tend to make a diverse range of decisions. But if everyone’s deriving their decisions from a similar artificial intelligence, this can limit the diversity of opinion.
Consider an extreme, nonfinancial situation in which everyone depends on ChatGPT to decide on the best computer to buy. Consumers are already very prone to herding behavior, in which they tend to buy the same products and models. For example, reviews on Yelp, Amazon and so on motivate consumers to pick among a few top choices.
Since decisions made by the generative AI-powered chatbot are based on past training data, the suggestions it produces would be similar from one user to the next. It is highly likely that ChatGPT would suggest the same brand and model to everyone. This might take herding to a whole new level and could lead to shortages in certain products and services as well as severe price spikes.
This becomes more problematic when the AI making the decisions is informed by biased and incorrect information. AI algorithms can reinforce existing biases when systems are trained on biased, old or limited data sets. And ChatGPT and similar tools have been criticized for making factual errors.
In addition, since market crashes are relatively rare, there isn’t much data on them. Because generative AIs depend on training data to learn, their lack of knowledge about crashes could make crashes more likely to happen.
For now, at least, it seems most banks won’t be allowing their employees to take advantage of ChatGPT and similar tools. Citigroup, Bank of America, Goldman Sachs and several other lenders have already banned their use on trading-room floors, citing privacy concerns.
But I strongly believe banks will eventually embrace generative AI, once they resolve concerns they have with it. The potential gains are too significant to pass up – and there’s a risk of being left behind by rivals.
Artificial intelligence (AI) is a true disruptive technology. As any informed content writer can tell you, it creates efficiencies by speeding up data gathering, research, and even the creation of graphics that reflect the content. For example, it is arguably quicker to ask ChatGPT for a list of ticker symbols matching company names than to look them up one by one. Over the course of a week, these small time savers add up, and far more can be produced.
This raises the question: what are the limits of AI – what can’t it do?
Worker Displacement
Technological revolutions have always benefitted humankind in the long run; in the short run, they have been disruptive, often displacing people who then have to retrain.
A new Goldman Sachs report says “significant disruption” could be on the horizon for the labor market. Goldman’s analysis of jobs in the U.S. and Europe shows that two-thirds of jobs could be automated at least to some degree. In the U.S., “of those occupations which are exposed, most have a significant — but partial — share of their workload (25-50%) that can be replaced,” Goldman Sachs’ analysts said in the paper.
Around the world, as many as 300 million jobs could be affected, the report says. Changes to labor markets are therefore likely – although historically, technological progress doesn’t just make jobs redundant, it also creates new ones. And the added productivity allows the masses to live wealthier lives. This was clearly the end result of the industrial revolution, and years after the computer revolution, we enjoy a high rate of employment and have at our fingertips much that we never even dreamed of.
The Goldman report says the use of AI technology could boost labor productivity growth and lift global GDP by as much as 7% over time.
There are few reasons to expect that the AI revolution won’t also provide more goods and services per person for a richer existence. But what about the disruption in the interim? I was curious to know what artificial intelligence is not expected to be able to do. There isn’t much information out there, so I went to an AI source and fed it a series of pointed questions about its nature. Part of that nature is to not intentionally lie, and I found the responses worth sharing, as we will all soon be impacted by what the technology can and cannot do.
Limitations of AI that Will Persist
Artificial intelligence has come a long way in recent years, and the speed of progression and adoption is accelerating. As a result, applications have become increasingly sophisticated. But there are still many things that AI cannot do now and may never be able to do.
One thing that AI cannot do now and may never be able to do is to truly understand human emotions and intentions. While AI algorithms can detect patterns in data and recognize certain emotional expressions, they do not have the ability to experience emotions themselves. This means that AI cannot truly understand the nuances of human communication, which can lead to misinterpretation and miscommunication.
Another limitation of AI is that it cannot replicate the creativity and intuition of humans. While AI can generate new ideas based on existing data, it lacks the ability to come up with truly original and innovative ideas. This is because creativity and intuition are often based on a combination of experience, emotion, and imagination, which are difficult to replicate in a machine.
AI also struggles with tasks that require common sense reasoning or context awareness. For example, AI may be able to identify a picture of a cat, but it may struggle to understand that a cat is an animal that can be petted or that it can climb trees. This is because AI lacks the contextual understanding that humans have built up through years of experience and interaction with the world around us.
In the realm of stocks and economics, AI has shown promise in analyzing data and making predictions, but there are still limitations to its abilities. For example, AI can analyze large datasets and identify patterns in market trends, but it cannot account for unexpected events or human behavior that may affect the market. This means that while AI can provide valuable insights, it cannot guarantee accurate predictions or prevent market volatility.
Another limitation of AI in economics is its inability to understand the complexities of social and political systems. Economic decisions are often influenced by social and political factors, such as government policies and public opinion. While AI can analyze economic data and identify correlations, it lacks the ability to understand the underlying social and political context that drives economic decisions.
A concern some have about artificial intelligence is that it may perpetuate biases that exist in the data it analyzes. This is the “garbage in, garbage out” data problem on steroids. For example, if historical data on stock prices is biased towards a certain demographic or industry, AI algorithms may replicate these biases in their predictions. This can lead to an amplified bias that proves faulty and not useful for economic decision making.
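A tiny, entirely fabricated example shows the mechanism. If “training” amounts to memorizing historical approval rates per group, any skew in the history reappears unchanged in the predictions; the loan-approval data below is invented solely to illustrate the point.

```python
# A minimal sketch of "garbage in, garbage out": a model that learns
# per-group historical averages reproduces whatever skew the history
# contains. The data is entirely made up.

historical = [  # (group, approved) -- group B was rarely approved
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),
]

def fit_group_rates(rows):
    """'Training' here is memorizing each group's approval rate."""
    totals, approved = {}, {}
    for group, label in rows:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + label
    return {g: approved[g] / totals[g] for g in totals}

model = fit_group_rates(historical)
for group, rate in sorted(model.items()):
    print(f"group {group}: predicted approval probability {rate:.0%}")
# Bias in (A favored over B historically), bias out (A favored in
# every future prediction) -- nothing in the model corrects the skew.
```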
Take Away
AI has shown remarkable progress in recent years, but, as with everything that came before, there are still things that it cannot do now and may never be able to do. AI lacks the emotional intelligence, creativity, and intuition of humans, as well as common sense reasoning and an understanding of social and political systems. In economics and stock market analysis, AI can provide valuable insights, but it cannot assure accurate predictions or prevent market volatility. So while companies are investing in ways to make our lives more productive with artificial intelligence and machine learning, it remains important to invest in our own human intelligence, growth, and expertise.
One Stop Systems, Inc. (OSS) designs and manufactures innovative AI Transportable edge computing modules and systems, including ruggedized servers, compute accelerators, expansion systems, flash storage arrays, and Ion Accelerator™ SAN, NAS, and data recording software for AI workflows. These products are used for AI data set capture, training, and large-scale inference in the defense, oil and gas, mining, autonomous vehicles, and rugged entertainment applications. OSS utilizes the power of PCI Express, the latest GPU accelerators and NVMe storage to build award-winning systems, including many industry firsts, for industrial OEMs and government customers. The company enables AI on the Fly® by bringing AI datacenter performance to ‘the edge,’ especially on mobile platforms, and by addressing the entire AI workflow, from high-speed data acquisition to deep learning, training, and inference. OSS products are available directly or through global distributors. For more information, go to www.onestopsystems.com.
Joe Gomes, Managing Director, Equity Research Analyst, Generalist , Noble Capital Markets, Inc.
Joshua Zoepfel, Research Associate, Noble Capital Markets, Inc.
Refer to the full report for the price target, fundamental analysis, and rating.
New Award. One Stop Systems received an initial order from a new military prime contractor for OSS 3U short-depth servers (SDS) for use in a U.S. Air Force anti-electronic warfare system. OSS has already commenced shipments under an initial purchase order. This program, the company’s first with this prime contractor, is valued at approximately $3.5 million over the next three years.
SDS. The servers feature proprietary OSS Gen 4 PCI Express NVMe controllers, OSS transportable hot-swap drive canisters, and NVMe SSDs that support government encryption standards. The servers are expected to serve as head storage nodes for data collection at U.S. Air Force ground stations that house military aircraft. They will be capable of recording large volumes of simulation data and delivering it at high speed and low latency to data scientists on the network.
Equity Research is available at no cost to Registered users of Channelchek. Not a Member? Click ‘Join’ to join the Channelchek Community. There is no cost to register, and we never collect credit card information.
This Company Sponsored Research is provided by Noble Capital Markets, Inc., a FINRA and S.E.C. registered broker-dealer (B/D).
*Analyst certification and important disclosures included in the full report. NOTE: investment decisions should not be based upon the content of this research summary. Proper due diligence is required before making any investment decision.