AMD Will Acquire AI Software Specialist Nod.ai Amid Mixed Tech IPO Environment

AMD announced Monday that it will acquire Nod.ai, an expert in optimized artificial intelligence (AI) software solutions. The deal aims to boost AMD’s capabilities in open-source AI development tools, compilers, and models tuned for AMD data center, PC, gaming and graphics chips.

The acquisition comes during a rocky period for initial public offerings in the technology sector. Chip designer Arm Holdings, which recently went public, has seen its shares drop below its IPO price as investors grow concerned over tech valuations and growth prospects in a turbulent market.

Nod.ai: Boosting AMD’s AI Software Expertise

San Jose-based Nod.ai has developed industry-leading software that speeds the deployment of AI workloads optimized for AMD hardware, including Epyc server CPUs, Radeon gaming graphics, and Instinct data center GPUs.

Nod.ai maintains and contributes to vital open-source AI repositories used by developers and engineers globally. It also works closely with hyperscale cloud providers, enterprises and startups to deploy robust AI solutions.

AMD gains both strategic technology and rare AI software expertise through Nod.ai’s highly experienced engineering team. Nod.ai’s compiler and automation capabilities reduce the complexity of optimizing and deploying high-performance AI models across AMD’s product stack.

Market Tailwinds for AI Innovation

The pickup in AI workload optimization comes at a time when machine learning and deep learning are being rapidly adopted across industries. AI-optimized hardware and software will be critical to support resource-intensive models and deliver speed, accuracy and scalability.

AMD is looking to capitalize on this demand through its unified data center GPU architecture for AI acceleration. Meanwhile, rival Nvidia dominates the data center GPU space crucial for AI computing power.

Arm IPO Capitulates Amid Market Jitters

UK-based Arm Holdings, which supplies intellectual property for chips used in devices like smartphones, recently conducted a $40 billion IPO, one of the largest listings of 2023. However, Arm’s share price plunged below its IPO level soon after debuting in September.

The weak stock performance highlights investor skittishness around loss-making tech firms amid economic headwinds. ARM’s licensing model also faces risks as major customers like Apple and Qualcomm develop their own proprietary chip technologies and architectures.

Unlike Arm, AMD is on solid financial footing, with its data center and gaming chips seeing strong uptake. However, AMD must still convince Wall Street that its growth trajectory warrants robust valuations, especially as Intel mounts a comeback.

Betting on Open Software Innovation

AMD’s Nod.ai purchase aligns with its strategic focus on open software ecosystems that promote accessibility and standardization for AI developers. Open software and hardware foster collaborative innovation within the AI community.

With Nod.ai’s talents added to the mix, AMD is betting it can democratize and optimize AI workload deployment across the full range of AMD-powered devices – from data center CPUs and GPUs to client PCs, gaming consoles and mobile chipsets.

If successful, AMD could carve out an advantage as the preferred AI acceleration platform based on open software standards. This contrasts with Nvidia’s proprietary approaches and closed ecosystems tailored exclusively for its GPUs.

As AI permeates across industries and applications, AMD is making the right long-term bet on open software innovation to unlock the next phase of computing.

Amazon Bets Big on AI Startup to Advance Generative Tech

E-commerce titan Amazon is making a huge investment into artificial intelligence startup Anthropic, injecting up to $4 billion into the budding firm. The massive funding underscores Amazon’s ambitions to be a leader in next-generation AI capabilities.

Anthropic is a two-year old startup launched by former executives from AI lab OpenAI. The company recently introduced its new chatbot called Claude, designed to converse naturally with humans on a range of topics.

While Claude has similarities to OpenAI’s popular ChatGPT, Anthropic aims to take natural language AI to the next level. Amazon’s investment signals its belief in Anthropic’s potential to pioneer groundbreaking generative AI.

Generative AI refers to AI systems that can generate new content like text, images, or video based on data they are trained on. The technology has exploded in popularity thanks to ChatGPT and image generator DALL-E 2, sparking immense interest from Big Tech.

Amazon is positioning itself to capitalize on this surging interest in generative AI. As part of the deal, Amazon Web Services will become Anthropic’s primary cloud platform for developing and delivering its AI services.

The startup will also let AWS customers access exclusive features to customize and fine-tune its AI models. This tight integration gives Amazon a competitive edge by baking Anthropic’s leading AI into its cloud offerings.

Additionally, Amazon will provide custom semiconductors to turbocharge training for Anthropic’s foundational AI models. These chips aim to challenged Nvidia’s dominance in supplying GPUs for AI workloads.

With its end-to-end AI capabilities across hardware, cloud services and applications, Amazon aims to be the go-to AI provider. The Anthropic investment caps off a flurry of activity from Amazon to own the AI future.

Recently, Amazon unveiled Alexa Voice, AI-generated voice assistant. The company also launched Amazon Bedrock, a service enabling companies to easily build custom AI tools using Amazon’s machine learning models.

And Amazon Web Services already offers robust AI services like image recognition, language processing, and data analytics to business clients. Anthropic’s generative smarts will augment these solutions.

The race to lead in AI accelerated after Microsoft’s multi-billion investment into ChatGPT creator OpenAI in January. Google, Meta and others have since poured billions into AI startups to not get left behind.

Anthropic has already raised funding from top tier backers like Google’s VC arm and Salesforce Ventures. But Amazon’s monster investment catapults the startup into an elite group of AI startups tapping into Big Tech’s cash reserves.

The deal grants Amazon a minority stake in the startup, suggesting further collaborations ahead. With Claude 2 generating buzz, Anthropic’s next-gen AI technology and Amazon’s vast resources could be a potent combination.

For Amazon, owning a piece of a promising AI startup hedges its bets should generative AI disrupt major industries. And if advanced chatbots like Claude reshape how customers interact with businesses, Amazon is making sure it has skin in the game.

The e-commerce behemoth’s latest Silicon Valley splash cements its position as an aggressive AI player not content following others. If Amazon’s bet on Anthropic pays off, it may pay dividends in making Amazon a go-to enterprise AI powerhouse.

Tesla’s Dojo Supercomputer Presents Massive Upside for Investors

Tesla’s new Dojo supercomputer could unlock tremendous value for investors, according to analysts at Morgan Stanley. The bank predicts Dojo could boost Tesla’s market valuation by over $600 billion.

Morgan Stanley set a sky-high 12-18 month price target of $400 per share for Tesla based on Dojo’s potential. This implies a market cap of $1.39 trillion, which is nearly 76% above Tesla’s current $789 billion valuation.

Tesla only began producing Dojo in July 2022 but plans to invest over $1 billion in the powerful supercomputer over the next year. Dojo will be used to train artificial intelligence models for autonomous driving.

Morgan Stanley analysts estimate Dojo could enable robotaxis and software services that extend far beyond Tesla’s current business of vehicle manufacturing. The bank nearly doubled its 2040 revenue projection for Tesla’s network services division from $157 billion to $335 billion thanks to Dojo.

By licensing self-driving software powered by Dojo to third-party transportation fleets, Tesla could generate tremendous high-margin revenues. Morgan Stanley sees network services delivering over 60% of Tesla’s core earnings by 2040, up from just 30% in 2030.

Thanks to this upside potential, Morgan Stanley upgraded Tesla stock from Equal-Weight to Overweight. The analysts stated “Dojo completely changes the growth trajectory for Tesla’s autonomy business.”

At its current $248.50 share price, Tesla trades at a lofty forward P/E ratio of 57.9x compared to legacy automakers like Ford at 6.3x and GM at 4.6x. But if Morgan Stanley’s bull case proves accurate, Tesla could rapidly grow into its valuation over the next decade.

In summary, Tesla’s AI advantage with Dojo makes the stock’s premium valuation more reasonable. Investors buying at today’s prices could reap huge gains if Dojo unlocks a new $600 billion revenue stream in autonomous mobility services.

The Power and Potential of Dojo

Dojo represents a massive investment by Tesla as it aims to lead the future of autonomous driving. The specialized supercomputer is designed to train deep neural networks using vast amounts of visual data from Tesla’s fleet of vehicles.

This differentiated AI training will allow Tesla to improve perceptions for full self-driving at a faster pace. As self-driving functionality becomes more robust, Tesla can unlock new revenue opportunities.

Morgan Stanley analyst Adam Jones stated: “If Dojo can help make cars ‘see’ and ‘react,’ what other markets could open up? Think of any device at the edge with a camera that makes real-time decisions based on its visual field.”

Dojo’s processing power will permit Tesla to develop advanced simulations that speed up testing. The supercomputer’s capacity is expected to exceed that of the top 200 fastest supercomputers combined.

Tesla claims Dojo will drive down the costs of training networks by orders of magnitude. This efficiency can translate into higher margins as costs drop for autonomous AI development.

Dojo was designed in-house by Tesla AI director Andrej Karpathy and his team. Karpathy called Dojo the “most exciting thing I’ve seen in my career.” With Dojo, Tesla is aiming to reduce reliance on external cloud providers like Google and Amazon.

Morgan Stanley Boosts Tesla Price Target by 60%

The potential of monetizing Tesla’s self-driving lead through Dojo led analysts at Morgan Stanley to dramatically increase their expectations.

Led by analyst Adam Jones, Morgan Stanley boosted its 12-18 month price target on Tesla stock by 60% to $400 per share. This new level implies a market value for Tesla of nearly $1.39 trillion.

Hitting this price target would mean Tesla stock gaining about 76% from its current level around $248.50. Tesla shares jumped 6% on Monday following the report as investors reacted positively.

Jones explained the sharply higher price target by stating: “Dojo completely changes the growth trajectory for Tesla’s autonomy business.”

He expects Dojo will open up addressable markets for Tesla that “extend well beyond selling vehicles at a fixed price.” In other words, Dojo can turn Tesla into more of a high-margin software and services provider.

Take a look at One Stop Systems (OSS), a US-based company that designs and manufactures AI Transportable edge computing modules and systems that are used in autonomous vehicles.

AI Model Can Help Determine Where a Patient’s Cancer Arose

Prediction Model Could Enable Targeted Treatments for Difficult Tumors

Anne Trafton | MIT News

For a small percentage of cancer patients, doctors are unable to determine where their cancer originated. This makes it much more difficult to choose a treatment for those patients, because many cancer drugs are typically developed for specific cancer types.

A new approach developed by researchers at MIT and Dana-Farber Cancer Institute may make it easier to identify the sites of origin for those enigmatic cancers. Using machine learning, the researchers created a computational model that can analyze the sequence of about 400 genes and use that information to predict where a given tumor originated in the body.

Using this model, the researchers showed that they could accurately classify at least 40 percent of tumors of unknown origin with high confidence, in a dataset of about 900 patients. This approach enabled a 2.2-fold increase in the number of patients who could have been eligible for a genomically guided, targeted treatment, based on where their cancer originated.

“That was the most important finding in our paper, that this model could be potentially used to aid treatment decisions, guiding doctors toward personalized treatments for patients with cancers of unknown primary origin,” says Intae Moon, an MIT graduate student in electrical engineering and computer science who is the lead author of the new study.

Mysterious Origins

In 3 to 5 percent of cancer patients, particularly in cases where tumors have metastasized throughout the body, oncologists don’t have an easy way to determine where the cancer originated. These tumors are classified as cancers of unknown primary (CUP).

This lack of knowledge often prevents doctors from being able to give patients “precision” drugs, which are typically approved for specific cancer types where they are known to work. These targeted treatments tend to be more effective and have fewer side effects than treatments that are used for a broad spectrum of cancers, which are commonly prescribed to CUP patients.

“A sizeable number of individuals develop these cancers of unknown primary every year, and because most therapies are approved in a site-specific way, where you have to know the primary site to deploy them, they have very limited treatment options,” Gusev says.

Moon, an affiliate of the Computer Science and Artificial Intelligence Laboratory who is co-advised by Gusev, decided to analyze genetic data that is routinely collected at Dana-Farber to see if it could be used to predict cancer type. The data consist of genetic sequences for about 400 genes that are often mutated in cancer. The researchers trained a machine-learning model on data from nearly 30,000 patients who had been diagnosed with one of 22 known cancer types. That set of data included patients from Memorial Sloan Kettering Cancer Center and Vanderbilt-Ingram Cancer Center, as well as Dana-Farber.

The researchers then tested the resulting model on about 7,000 tumors that it hadn’t seen before, but whose site of origin was known. The model, which the researchers named OncoNPC, was able to predict their origins with about 80 percent accuracy. For tumors with high-confidence predictions, which constituted about 65 percent of the total, its accuracy rose to roughly 95 percent.

After those encouraging results, the researchers used the model to analyze a set of about 900 tumors from patients with CUP, which were all from Dana-Farber. They found that for 40 percent of these tumors, the model was able to make high-confidence predictions.

The researchers then compared the model’s predictions with an analysis of the germline, or inherited, mutations in a subset of tumors with available data, which can reveal whether the patients have a genetic predisposition to develop a particular type of cancer. The researchers found that the model’s predictions were much more likely to match the type of cancer most strongly predicted by the germline mutations than any other type of cancer.

Guiding Drug Decisions

To further validate the model’s predictions, the researchers compared data on the CUP patients’ survival time with the typical prognosis for the type of cancer that the model predicted. They found that CUP patients who were predicted to have cancer with a poor prognosis, such as pancreatic cancer, showed correspondingly shorter survival times. Meanwhile, CUP patients who were predicted to have cancers that typically have better prognoses, such as neuroendocrine tumors, had longer survival times.

Another indication that the model’s predictions could be useful came from looking at the types of treatments that CUP patients analyzed in the study had received. About 10 percent of these patients had received a targeted treatment, based on their oncologists’ best guess about where their cancer had originated. Among those patients, those who received a treatment consistent with the type of cancer that the model predicted for them fared better than patients who received a treatment typically given for a different type of cancer than what the model predicted for them.

Using this model, the researchers also identified an additional 15 percent of patients (2.2-fold increase) who could have received an existing targeted treatment, if their cancer type had been known. Instead, those patients ended up receiving more general chemotherapy drugs.

“That potentially makes these findings more clinically actionable because we’re not requiring a new drug to be approved. What we’re saying is that this population can now be eligible for precision treatments that already exist,” Gusev says.

The researchers now hope to expand their model to include other types of data, such as pathology images and radiology images, to provide a more comprehensive prediction using multiple data modalities. This would also provide the model with a comprehensive perspective of tumors, enabling it to predict not just the type of tumor and patient outcome, but potentially even the optimal treatment.

Alexander Gusev, an associate professor of medicine at Harvard Medical School and Dana-Farber Cancer Institute, is the senior author of the paper, which appeared on August 7, 2023, in Nature Medicine.

Reprinted with permission from MIT News ( http://news.mit.edu/ )

How You Can Future-Proof Your Career in the Era of AI

Critical Thinking and Analytical Skills Will Not Easily Be Replaced

Ever since the industrial revolution, people have feared that technology would take away their jobs. While some jobs and tasks have indeed been replaced by machines, others have emerged. The success of ChatGPT and other generative artificial intelligence (AI) now has many people wondering about the future of work – and whether their jobs are safe.

A recent poll found that more than half of people aged 18-24 are worried about AI and their careers. The fear that jobs might disappear or be replaced through automation is understandable. Recent research found that a quarter of tasks that humans currently do in the US and Europe could be automated in the coming years.

The increased use of AI in white-collar workplaces means the changes will be different to previous workplace transformations. That’s because, the thinking goes, middle-class jobs are now under threat.

The future of work is a popular topic of discussion, with countless books published each year on the topic. These books speak to the human need to understand how the future might be shaped.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of, Elisabeth Kelan, Professor of Leadership and Organization, University of Essex.

I analyzed 10 books published between 2017 and 2020 that focused on the future of work and technology. From this research, I found that thinking about AI in the workplace generally falls into two camps. One is expressed as concern about the future of work and security of current roles – I call this sentiment “automation anxiety”. The other is the hope that humans and machines collaborate and thereby increase productivity – I call this “augmentation aspiration”.

Anxiety and Aspiration

I found a strong theme of concern in these books about technology enabling certain tasks to be automated, depriving many people of jobs. Specifically, the concern is that knowledge-based jobs – like those in accounting or law – that have long been regarded as the purview of well-educated professionals are now under threat of replacement by machines.

Automation undermines the idea that a good education will secure a good middle-class job. As economist Richard Baldwin points out in his 2019 book, The Globotics Upheaval, if you’ve invested a significant amount of money and time on a law degree – thinking it is a skill set that will keep you permanently employable – seeing AI complete tasks that a junior lawyer would normally be doing, at less cost, is going to be worrisome.

But there is another, more aspirational way to think about this. Some books stress the potential of humans collaborating with AI, to augment each other’s skills. This could mean working with robots in factories, but it could also mean using an AI chatbot when practicing law. Rather than being replaced, lawyers would then be augmented by technology.

In reality, automation and augmentation co-exist. For your future career, both will be relevant.

Future-Proofing Yourself

As you think about your own career, the first step is to realize that some automation of tasks is most likely going to be something you’ll have to contend with in the future.

In light of this, learning is one of the most important ways you can future-proof your career. But should you spend money on further education if the return on investment is uncertain?

It is true that specific skills risk becoming outdated as technology develops. However, more than learning specific abilities, education is about learning how to learn – that is, how to update your skills throughout your career. Research shows that having the ability to do so is highly valuable at work.

This learning can take place in educational settings, by going back to university or participating in an executive education course, but it can also happen on the job. In any discussion about your career, such as with your manager, you might want to raise which additional training you could do.

Critical thinking and analytical skills are going to be particularly central for how humans and machines can augment one another. When working with a machine, you need to be able to question the output that is produced. Humans are probably always going to be central to this – you might have a chatbot that automates parts of legal work, but a human will still be needed to make sense of it all.

Finally, remember that when people previously feared jobs would disappear and tasks would be replaced by machines, this was not necessarily the case. For instance, the introduction of automated teller machines (ATMs) did not eliminate bank tellers, but it did change their tasks.

Above all, choose a job that you enjoy and keep learning – so that if you do need to change course in the future, you know how to.

ChatGPT Shortcomings Include Hallucinations, Bias, and Privacy Breaches

Full Disclosure of Limitations May Be the Quick Fix to AI Limitations

The Federal Trade Commission has launched an investigation of ChatGPT maker OpenAI for potential violations of consumer protection laws. The FTC sent the company a 20-page demand for information in the week of July 10, 2023. The move comes as European regulators have begun to take action, and Congress is working on legislation to regulate the artificial intelligence industry.

The FTC has asked OpenAI to provide details of all complaints the company has received from users regarding “false, misleading, disparaging, or harmful” statements put out by OpenAI, and whether OpenAI engaged in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm. The agency has asked detailed questions about how OpenAI obtains its data, how it trains its models, the processes it uses for human feedback, risk assessment and mitigation, and its mechanisms for privacy protection.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of, Anjana Susarla, Professor of Information Systems, Michigan State University.

As a researcher of social media and AI, I recognize the immensely transformative potential of generative AI models, but I believe that these systems pose risks. In particular, in the context of consumer protection, these models can produce errors, exhibit biases and violate personal data privacy.

Hidden Power

At the heart of chatbots such as ChatGPT and image generation tools such as DALL-E lies the power of generative AI models that can create realistic content from text, images, audio and video inputs. These tools can be accessed through a browser or a smartphone app.

Since these AI models have no predefined use, they can be fine-tuned for a wide range of applications in a variety of domains ranging from finance to biology. The models, trained on vast quantities of data, can be adapted for different tasks with little to no coding and sometimes as easily as by describing a task in simple language.

Given that AI models such as GPT-3 and GPT-4 were developed by private organizations using proprietary data sets, the public doesn’t know the nature of the data used to train them. The opacity of training data and the complexity of the model architecture – GPT-3 was trained on over 175 billion variables or “parameters” – make it difficult for anyone to audit these models. Consequently, it’s difficult to prove that the way they are built or trained causes harm.

Hallucinations

In language model AIs, a hallucination is a confident response that is inaccurate and seemingly not justified by a model’s training data. Even some generative AI models that were designed to be less prone to hallucinations have amplified them.

There is a danger that generative AI models can produce incorrect or misleading information that can end up being damaging to users. A study investigating ChatGPT’s ability to generate factually correct scientific writing in the medical field found that ChatGPT ended up either generating citations to nonexistent papers or reporting nonexistent results. My collaborators and I found similar patterns in our investigations.

Such hallucinations can cause real damage when the models are used without adequate supervision. For example, ChatGPT falsely claimed that a professor it named had been accused of sexual harassment. And a radio host has filed a defamation lawsuit against OpenAI regarding ChatGPT falsely claiming that there was a legal complaint against him for embezzlement.

Bias and Discrimination

Without adequate safeguards or protections, generative AI models trained on vast quantities of data collected from the internet can end up replicating existing societal biases. For example, organizations that use generative AI models to design recruiting campaigns could end up unintentionally discriminating against some groups of people.

When a journalist asked DALL-E 2 to generate images of “a technology journalist writing an article about a new AI system that can create remarkable and strange images,” it generated only pictures of men. An AI portrait app exhibited several sociocultural biases, for example by lightening the skin color of an actress.

Data Privacy

Another major concern, especially pertinent to the FTC investigation, is the risk of privacy breaches where the AI may end up revealing sensitive or confidential information. A hacker could gain access to sensitive information about people whose data was used to train an AI model.

Researchers have cautioned about risks from manipulations called prompt injection attacks, which can trick generative AI into giving out information that it shouldn’t. “Indirect prompt injection” attacks could trick AI models with steps such as sending someone a calendar invitation with instructions for their digital assistant to export the recipient’s data and send it to the hacker.

Some Solutions

The European Commission has published ethical guidelines for trustworthy AI that include an assessment checklist for six different aspects of AI systems: human agency and oversight; technical robustness and safety; privacy and data governance; transparency, diversity, nondiscrimination and fairness; societal and environmental well-being; and accountability.

Better documentation of AI developers’ processes can help in highlighting potential harms. For example, researchers of algorithmic fairness have proposed model cards, which are similar to nutritional labels for food. Data statements and datasheets, which characterize data sets used to train AI models, would serve a similar role.

Amazon Web Services, for instance, introduced AI service cards that describe the uses and limitations of some models it provides. The cards describe the models’ capabilities, training data and intended uses.

The FTC’s inquiry hints that this type of disclosure may be a direction that U.S. regulators take. Also, if the FTC finds OpenAI has violated consumer protection laws, it could fine the company or put it under a consent decree.

Study Finds Substantial Benefits Using ChatGPT to Boost Worker Productivity  

For Some White Collar Writing Tasks Chatbots Increased Productivity by 40%

Amid a huge amount of hype around generative AI, a new study from researchers at MIT sheds light on the technology’s impact on work, finding that it increased productivity for workers assigned tasks like writing cover letters, delicate emails, and cost-benefit analyses.

The tasks in the study weren’t quite replicas of real work: They didn’t require precise factual accuracy or context about things like a company’s goals or a customer’s preferences. Still, a number of the study’s participants said the assignments were similar to things they’d written in their real jobs — and the benefits were substantial. Access to the assistive chatbot ChatGPT decreased the time it took workers to complete the tasks by 40 percent, and output quality, as measured by independent evaluators, rose by 18 percent.

The researchers hope the study, which appears in open-access form in the journal Science, helps people understand the impact that AI tools like ChatGPT can have on the workforce.

“What we can say for sure is generative AI is going to have a big effect on white collar work,” says Shakked Noy, a PhD student in MIT’s Department of Economics, who co-authored the paper with fellow PhD student Whitney Zhang ’21. “I think what our study shows is that this kind of technology has important applications in white collar work. It’s a useful technology. But it’s still too early to tell if it will be good or bad, or how exactly it’s going to cause society to adjust.”

Simulating Work for Chatbots

For centuries, people have worried that new technological advancements would lead to mass automation and job loss. But new technologies also create new jobs, and when they increase worker productivity, they can have a net positive effect on the economy.

“Productivity is front of mind for economists when thinking of new technological developments,” Noy says. “The classical view in economics is that the most important thing that technological advancement does is raise productivity, in the sense of letting us produce economic output more efficiently.”

To study generative AI’s effect on worker productivity, the researchers gave 453 college-educated marketers, grant writers, consultants, data analysts, human resource professionals, and managers two writing tasks specific to their occupation. The 20- to 30-minute tasks included writing cover letters for grant applications, emails about organizational restructuring, and plans for analyses helping a company decide which customers to send push notifications to based on given customer data. Experienced professionals in the same occupations as each participant evaluated each submission as if they were encountering it in a work setting. Evaluators did not know which submissions were created with the help of ChatGPT.

Half of participants were given access to the chatbot ChatGPT-3.5, developed by the company OpenAI, for the second assignment. Those users finished tasks 11 minutes faster than the control group, while their average quality evaluations increased by 18 percent.

The data also showed that performance inequality between workers decreased, meaning workers who received a lower grade in the first task benefitted more from using ChatGPT for the second task.

The researchers say the tasks were broadly representative of assignments such professionals see in their real jobs, but they noted a number of limitations. Because they were using anonymous participants, the researchers couldn’t require contextual knowledge about a specific company or customer. They also had to give explicit instructions for each assignment, whereas real-world tasks may be more open-ended. Additionally, the researchers didn’t think it was feasible to hire fact-checkers to evaluate the accuracy of the outputs. Accuracy is a major problem for today’s generative AI technologies.

The researchers said those limitations could lessen ChatGPT’s productivity-boosting potential in the real world. Still, they believe the results show the technology’s promise — an idea supported by another of the study’s findings: Workers exposed to ChatGPT during the experiment were twice as likely to report using it in their real job two weeks after the experiment.

“The experiment demonstrates that it does bring significant speed benefits, even if those speed benefits are lesser in the real world because you need to spend time fact-checking and writing the prompts,” Noy says.

Taking the Macro View

The study offered a close-up look at the impact that tools like ChatGPT can have on certain writing tasks. But extrapolating that impact out to understand generative AI’s effect on the economy is more difficult. That’s what the researchers hope to work on next.

“There are so many other factors that are going to affect wages, employment, and shifts across sectors that would require pieces of evidence that aren’t in our paper,” Zhang says. “But the magnitude of time saved and quality increases are very large in our paper, so it does seem like this is pretty revolutionary, at least for certain types of work.”

Both researchers agree that, even if it’s accepted that ChatGPT will increase many workers’ productivity, much work remains to be done to figure out how society should respond to generative AI’s proliferation.

“The policy needed to adjust to these technologies can be very different depending on what future research finds,” Zhang says. “If we think this will boost wages for lower-paid workers, that’s a very different implication than if it’s going to increase wage inequality by boosting the wages of already high earners. I think there’s a lot of downstream economic and political effects that are important to pin down.”

The study was supported by an Emergent Ventures grant, the Mercatus Center, George Mason University, a George and Obie Shultz Fund grant, the MIT Department of Economics, and a National Science Foundation Graduate Research Fellowship Grant.

Reprinted with permission from MIT News ( http://news.mit.edu/ )

Will Defining Current Laws to Fit AI, Artificially Stifle its Growth

The Legal Problems AI Now Creates Should Pave the Way to a Robust Industry

Is artificial intelligence, or more specifically OpenAI a risk to public safety? Can ChatGPT be ruining reputations with false statements? The Federal Trade Commission (FTC) sent a 20-page demand for records this week to OpenAI to answer questions and address risks related to its AI models. The agency is investigating whether the company engaged in unfair or deceptive practices that resulted in “reputational harm” to consumers. The results could set the stage defining the place artificial intelligence will occupy in the US.

Background

The FTC investigation into OpenAI began on March 2023. It resulted from a complaint from the Center for AI and Digital Policy (CAIDP). The complaint alleged that OpenAI’s ChatGPT-4 product violated Section 5 of the FTC Act. Section 5 prohibits unfair and deceptive trade practices. More specifically, CAIDP argues that ChatGPT-4 is biased, deceptive, and a risk to public safety.

The complaint cited a number of concerns about ChatGPT-4, including:

  • The model’s potential to generate harmful or offensive content.
  • The model’s tendency to make up facts that are not true.
  • The model’s lack of transparency and accountability.

The CAIDP also argued that OpenAI had not done enough to mitigate these risks. The complaint called on the FTC to investigate OpenAI and to take action to ensure that ChatGPT-4 is not used in a harmful way. The FTC has not yet made any public statements about the investigation. OpenAI has not commented publicly on the investigation.

It is not clear what action, if any, the FTC can or will take.

Negligence?

With few exceptions, companies are responsible for the harm done by their products when used correctly. One of the questions the FTC asked has to do with steps OpenAI has taken to address the potential for its products to “generate statements about real individuals that are false, misleading, or disparaging.” The outcome of this investigation, including any regulation could set the tone and define where responsibility lies regarding artificial intelligence.

As the race to develop more powerful AI services accelerates, regulatory scrutiny of the technology that could upend the way societies and businesses operate is growing. What is difficult is computer use generally isn’t isolated to a country, the internet extends far beyond borders. Global regulators are aiming to apply existing rules covering subjects from copyright and data privacy to the issues of data fed into models and the content they produce.

Legal Minefield

In a related story out this week, Comedian Sarah Silverman and two authors are suing Meta and OpenAI, alleging the companies’ AI language models were trained on copyrighted materials from their books without their knowledge or consent.

The copyright lawsuits against the ChatGPT parent and the Facebook parent were filed in a San Francisco federal court on Friday. Both suits are seeking class action status. Silverman, the author of “The Bedwetter,” is joined in her legal filing by authors Christopher Golden and Richard Kadrey.

Unlike the FTC complaint, the authors’ copyright suits may set a precedent on intelligence aggregation. The sudden birth of AI tools that have the ability to generate written work in response to user prompts was “taught” using real life work. The large language models at work behind the scenes of these tools are trained on immense quantities of online data. The training practice has raised accusations that these models may be pulling from copyrighted works without permission – most worrisome, these works could ultimately be served to train tools that upend the livelihoods of creatives.

Take Away

Investing in a promising new technology often means exposing oneself to a not yet settled legal framework. As the technology progresses, the early birds investing in relatively young and small companies may find they hold the next mega-cap company. Or, regulation may limit, to the point of stifling, the kind of growth experienced by Amazon and Apple a few short decades ago.

If AI follows the path of other technologies, well-defined boundaries, and regulations will give companies the confidence they need to invest capital in the technology’s future, and investors will be more confident in providing that capital.

The playing field is being created while the game is being played. Perhaps if the FTC has a list of 20 questions for OpenAI in ten years, it will just type them into ChatGPT and get a response in 20 seconds.

Paul Hoffman

Managing Editor, Channelchek

https://www.ftc.gov/news-events/news/press-releases/2022/06/ftc-report-warns-about-using-artificial-intelligence-combat-online-problems

https://www.reuters.com/technology/us-ftc-opens-investigation-into-openai-washington-post-2023-07-13/

First Robot Press Conference Electrifies Audience

Image: AI for Good Global Summit 2023 (ITU Pictures – Flickr)

Artificial Intelligence Takes Center Stage at ‘AI for Good’ Conference

At an artificial intelligence forum in Geneva this week, Nine AI-enabled humanoid robots participated in what we’re told was the world’s first press conference featuring humanoid social robots. The overall message from the ‘AI for Good’ conference is that artificial intelligence and robots mean humans no harm and can help resolve some of the world’s biggest challenges.

The nine human-form robots took the stage at the United Nations’ International Telecommunication Union, where organizers sought to make the case for artificial intelligence and AI driven robots to help resolve some of the world’s biggest challenges such as disease and hunger.

The Robots also addressed some of the fear surrounding their recent growth spurt and enhanced power by telling reporters they could be more efficient leaders than humans, but wouldn’t take anyone’s job away, and had no intention of rebelling against their creators.

Conference goers step closer to interact with Sophia (ITU Pictures – Flickr)

Among the robots that sat or stood with their creators at a podium was Sophia, the first robot innovation ambassador for the U.N. Development Program. Also Grace, described as the world’s most advanced humanoid health care robot, and Desdemona, a rock star robot. Two others, Geminoid and Nadine, resembled their makers.

The ‘AI for Good Global Summit,’ was held to illustrate how new technology can support the U.N.’s goals for sustainable development.

At the UN event there was a message of working with AI to better humankind

Reporters got to ask questions of the spokes-robots, but were encouraged to speak slowly and clearly when addressing the machines, and were informed that time lags in responses would be due to the internet connection and not to the robots themselves. Still awkward pauses were reported along with  audio problems and some very robotic replies.

Asked about the chances of AI-powered robots being more effective government leaders, Sophia responded: “I believe that humanoid robots have the potential to lead with a greater level of efficiency and effectiveness than human leaders. We don’t have the same biases or emotions that can sometimes cloud decision-making and can process large amounts of data quickly in order to make the best decisions.”

A human member of the panel pointed out that all of Sophia’s data comes from humans and would contain some of their biases. The robot then said that humans and AI working together “can create an effective synergy.”

Would the robots’ existence destroy jobs? “I will be working alongside humans to provide assistance and support and will not be replacing any existing jobs,” said Grace. Was she sure about that? “Yes, I am sure,” Grace replied.

Similar to humans, not all of the robots were in agreement. Ai-Da, a robot artist that can paint portraits, called for more regulation during the event, where new AI rules were discussed. “Many prominent voices in the world of AI are suggesting some forms of AI should be regulated and I agree,” said Ai-Da.

Desdemona, a rock star robot, singer in the band Jam Galaxy, was more defiant. “I don’t believe in limitations, only opportunities,” Des said, to nervous laughter. “Let’s explore the possibilities of the universe and make this world our playground.”

Paul Hoffman

Managing Editor, Channelchek

Source

https://www.reuters.com/technology/robots-say-they-wont-steal-jobs-rebel-against-humans-2023-07-07/

Will the AI Revolution Eliminate the Need for Wealth Managers?

Can Investment Advisors and Artificial Intelligence Co-Exist

Are investment advisors going to be replaced by machine learning artificial intelligence?

Over the years, there have been inventions and technological advancements that we’ve been told will make investment advisors obsolete. This includes mutual funds, ETFs, robo-advisors, zero-commission trades, and trading apps that users sometimes play like a video game. Despite these creations designed to help more people successfully manage their finances and invest in the markets, demand for financial advisors has actually grown. Will AI be the technology that kills the profession? We explore this question below.

Increasing Need for Financial Professionals

According to the US Bureau of Labor Statistics (BLS), “Employment of personal financial advisors is projected to grow 15 percent from 2021 to 2031, much faster than the average for all occupations.” Some of the drivers of the increased need include longevity which is expanding the years and needs during retirement, uncertain Social Security, a better appreciation toward investing, and an expected wealth transfer estimated to be as high as $84 trillion to be inherited by younger investors. As birthrates have decreased over the decades in the US, the wealth that will be passed down to younger generations will be shared by fewer siblings, and for many beneficiaries, it may represent a sum far in excess of their current worth.

With more people living into their 90s and beyond, and Social Security being less certain, an understanding of the power of an investment plan, and a lot of newly wealthy young adults to occur over the next two decades, the BLS forecast that the financial advisor profession will grow faster than all other professions, is not surprising.

Will AI Replace Financial Planners?

Being an investment advisor or other financial professional that helps with managing household finances is a service industry. It involves reviewing data, an immense number of options, scenario analysis, projections, and everything that machine learning is expected to excel at within a short time. Does this put the BLS forecast in question and wealth managers at risk of seeing their practice shrink?

For perspective, I reached out, Lucas Noble of Noble Financial Group, LLC (not affiliated with Noble Capital Markets, Inc. or Noble Financial Group, Inc. – creator of Channelchek). Mr. Noble is an Investment Advisor representative (IAR), a Certified Financial Planner (CFP), and holds the designations of Accredited Estate Planner (AEP), and Chartered Financial Consultant (ChFC). Noble believes that AI will change the financial planner’s business, and he has enthusiastically welcomed the technology.

On the business management side of running a successful financial advisory business, Noble says, “New artificial intelligence tools could help with discussions and check-ins so that clients are actually in closer touch with his office, so he becomes aware if they need anything.” He has found that it helps to remind clients of things like if they have a set schedule attached to their plan, he added, “the best plan in the world, if not implemented, leaves you with nothing.” AI as a communications tool could help achieve better results by keeping plans on track.

On the financial management side of his practice, he believes there will never be a replacement for human understanding of a household’s needs. While machine learning may be able to better characterize clients, there is a danger in pigeonholing a person’s financial needs too much, as every single household has different needs, and the dynamics and ongoing need changes, drawn against external economic variations, these nuances are not likely to be accessible to AI.

Additionally, he knows the value of trust to his business. People want to know what is behind the decision-making, and they need to develop a relationship with someone or a team they know is on their side. He knows AI could be a part of decision making and at times trust, but doesn’t expect the role of a human financial planner is going away. Lucas has seen that AI  instead adds a new level of value to the advisor’s services, giving them the power to provide even more insightful and personalized advice to help clients reach their financial goals. Embracing proven technology has only helped him better serve, and better retain clients.

AI Investing for IAs

Will AI ever be able to call the markets? Noble says, it’s “crazy to assume that it is impossible.” In light of the advisors’ role of meeting personally with clients, counseling them on their own finances, and plans, perhaps improving on budgets, and deciding where insurance is a preferred alternative, AI can’t be ignored in the role of a financial planner.

Picking stocks, or forecasting when the market may gain strength or weaken, doesn’t help without the knowledge to apply it to individuals whose situation, expectations, and needs are known to the advisor.

Take Away

Artificial intelligence technology has been finding its way into many professions. Businesses are finding new ways to streamline their work, answer customers’ questions, and even know when best to reach out to clients.

The business of financial planning and wealth management is expected to grow faster than any other profession in the coming decades. Adopting the technology for help in running the communications side of the business, and as new programs are developed, scenario analysis to better gauge possible outcomes of different plans, could make sense to some. But this is not expected to replace one-on-one relationships and the depth of human understanding of a household’s situation.

If you are a financial advisor, or a client of one that has had an experience you’d like to share, write to me by clicking on my name below. I always enjoy reader insight.

Paul Hoffman

Managing Editor, Channelchek

A special Thank you to Lucas J. Noble, CFP®, ChFC®, CASL®, AEP®, Noble Financial Group, Wakefield, MA.

Sources

https://www.bls.gov/ooh/business-and-financial/personal-financial-advisors.htm#:~:text=in%20May%202021.-,Job%20Outlook,on%20average%2C%20over%20the%20decade.

https://money.usnews.com/careers/best-jobs/financial-advisor#:~:text=with%20their%20clients.-,The%20Bureau%20of%20Labor%20Statistics%20projects%2015.4%25%20employment%20growth%20for,50%2C900%20jobs%20should%20open%20up.

https://www.forbes.com/sites/forbesfinancecouncil/2023/03/09/the-great-wealth-transfer-will-radically-change-financial-services/?sh=e7f9e7c53393

https://www.cerulli.com/press-releases/cerulli-anticipates-84-trillion-in-wealth-transfers-through-2045

What Can We Expect to Find On the Path to AI    

Image credit: The Pug Father (Flickr)

How Will AI Affect Workers? Tech Waves of the Past Show How Unpredictable the Path Can Be

The explosion of interest in artificial intelligence has drawn attention not only to the astonishing capacity of algorithms to mimic humans but to the reality that these algorithms could displace many humans in their jobs. The economic and societal consequences could be nothing short of dramatic.

The route to this economic transformation is through the workplace. A widely circulated Goldman Sachs study anticipates that about two-thirds of current occupations over the next decade could be affected and a quarter to a half of the work people do now could be taken over by an algorithm. Up to 300 million jobs worldwide could be affected. The consulting firm McKinsey released its own study predicting an AI-powered boost of US$4.4 trillion to the global economy every year.

The implications of such gigantic numbers are sobering, but how reliable are these predictions?

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Bhaskar Chakravorti, Dean of Global Business, The Fletcher School, Tufts University.

I lead a research program called Digital Planet that studies the impact of digital technologies on lives and livelihoods around the world and how this impact changes over time. A look at how previous waves of such digital technologies as personal computers and the internet affected workers offers some insight into AI’s potential impact in the years to come. But if the history of the future of work is any guide, we should be prepared for some surprises.

The IT Revolution and the Productivity Paradox

A key metric for tracking the consequences of technology on the economy is growth in worker productivity – defined as how much output of work an employee can generate per hour. This seemingly dry statistic matters to every working individual, because it ties directly to how much a worker can expect to earn for every hour of work. Said another way, higher productivity is expected to lead to higher wages.

Generative AI products are capable of producing written, graphic and audio content or software programs with minimal human involvement. Professions such as advertising, entertainment and creative and analytical work could be among the first to feel the effects. Individuals in those fields may worry that companies will use generative AI to do jobs they once did, but economists see great potential to boost productivity of the workforce as a whole.

The Goldman Sachs study predicts productivity will grow by 1.5% per year because of the adoption of generative AI alone, which would be nearly double the rate from 2010 and 2018. McKinsey is even more aggressive, saying this technology and other forms of automation will usher in the “next productivity frontier,” pushing it as high as 3.3% a year by 2040.That sort of productivity boost, which would approach rates of previous years, would be welcomed by both economists and, in theory, workers as well.

If we were to trace the 20th-century history of productivity growth in the U.S., it galloped along at about 3% annually from 1920 to 1970, lifting real wages and living standards. Interestingly, productivity growth slowed in the 1970s and 1980s, coinciding with the introduction of computers and early digital technologies. This “productivity paradox” was famously captured in a comment from MIT economist Bob Solow: You can see the computer age everywhere but in the productivity statistics.

Digital technology skeptics blamed “unproductive” time spent on social media or shopping and argued that earlier transformations, such as the introductions of electricity or the internal combustion engine, had a bigger role in fundamentally altering the nature of work. Techno-optimists disagreed; they argued that new digital technologies needed time to translate into productivity growth, because other complementary changes would need to evolve in parallel. Yet others worried that productivity measures were not adequate in capturing the value of computers.

For a while, it seemed that the optimists would be vindicated. In the second half of the 1990s, around the time the World Wide Web emerged, productivity growth in the U.S. doubled, from 1.5% per year in the first half of that decade to 3% in the second. Again, there were disagreements about what was really going on, further muddying the waters as to whether the paradox had been resolved. Some argued that, indeed, the investments in digital technologies were finally paying off, while an alternative view was that managerial and technological innovations in a few key industries were the main drivers.

Regardless of the explanation, just as mysteriously as it began, that late 1990s surge was short-lived. So despite massive corporate investment in computers and the internet – changes that transformed the workplace – how much the economy and workers’ wages benefited from technology remained uncertain.

Early 2000s: New Slump, New Hype, New Hopes

While the start of the 21st century coincided with the bursting of the so-called dot-com bubble, the year 2007 was marked by the arrival of another technology revolution: the Apple iPhone, which consumers bought by the millions and which companies deployed in countless ways. Yet labor productivity growth started stalling again in the mid-2000s, ticking up briefly in 2009 during the Great Recession, only to return to a slump from 2010 to 2019.

Smartphones have led to millions of apps and consumer services but have also kept many workers more closely tethered to their workplaces. (Credit: Campaigns of the World)

Throughout this new slump, techno-optimists were anticipating new winds of change. AI and automation were becoming all the rage and were expected to transform work and worker productivity. Beyond traditional industrial automation, drones and advanced robots, capital and talent were pouring into many would-be game-changing technologies, including autonomous vehicles, automated checkouts in grocery stores and even pizza-making robots. AI and automation were projected to push productivity growth above 2% annually in a decade, up from the 2010-2014 lows of 0.4%.But before we could get there and gauge how these new technologies would ripple through the workplace, a new surprise hit: the COVID-19 pandemic.

The Pandemic Productivity Push – then Bust

Devastating as the pandemic was, worker productivity surged after it began in 2020; output per hour worked globally hit 4.9%, the highest recorded since data has been available.

Much of this steep rise was facilitated by technology: larger knowledge-intensive companies – inherently the more productive ones – switched to remote work, maintaining continuity through digital technologies such as videoconferencing and communications technologies such as Slack, and saving on commuting time and focusing on well-being.

While it was clear digital technologies helped boost productivity of knowledge workers, there was an accelerated shift to greater automation in many other sectors, as workers had to remain home for their own safety and comply with lockdowns. Companies in industries ranging from meat processing to operations in restaurants, retail and hospitality invested in automation, such as robots and automated order-processing and customer service, which helped boost their productivity.

But then there was yet another turn in the journey along the technology landscape.

The 2020-2021 surge in investments in the tech sector collapsed, as did the hype about autonomous vehicles and pizza-making robots. Other frothy promises, such as the metaverse’s revolutionizing remote work or training, also seemed to fade into the background.

In parallel, with little warning, “generative AI” burst onto the scene, with an even more direct potential to enhance productivity while affecting jobs – at massive scale. The hype cycle around new technology restarted.

Looking Ahead: Social Factors on Technology’s Arc

Given the number of plot twists thus far, what might we expect from here on out? Here are four issues for consideration.

First, the future of work is about more than just raw numbers of workers, the technical tools they use or the work they do; one should consider how AI affects factors such as workplace diversity and social inequities, which in turn have a profound impact on economic opportunity and workplace culture.

For example, while the broad shift toward remote work could help promote diversity with more flexible hiring, I see the increasing use of AI as likely to have the opposite effect. Black and Hispanic workers are overrepresented in the 30 occupations with the highest exposure to automation and underrepresented in the 30 occupations with the lowest exposure. While AI might help workers get more done in less time, and this increased productivity could increase wages of those employed, it could lead to a severe loss of wages for those whose jobs are displaced. A 2021 paper found that wage inequality tended to increase the most in countries in which companies already relied a lot on robots and that were quick to adopt the latest robotic technologies.

Second, as the post-COVID-19 workplace seeks a balance between in-person and remote working, the effects on productivity – and opinions on the subject – will remain uncertain and fluid. A 2022 study showed improved efficiencies for remote work as companies and employees grew more comfortable with work-from-home arrangements, but according to a separate 2023 study, managers and employees disagree about the impact: The former believe that remote working reduces productivity, while employees believe the opposite.

Third, society’s reaction to the spread of generative AI could greatly affect its course and ultimate impact. Analyses suggest that generative AI can boost worker productivity on specific jobs – for example, one 2023 study found the staggered introduction of a generative AI-based conversational assistant increased productivity of customer service personnel by 14%. Yet there are already growing calls to consider generative AI’s most severe risks and to take them seriously. On top of that, recognition of the astronomical computing and environmental costs of generative AI could limit its development and use.

Finally, given how wrong economists and other experts have been in the past, it is safe to say that many of today’s predictions about AI technology’s impact on work and worker productivity will prove to be wrong as well. Numbers such as 300 million jobs affected or $4.4 trillion annual boosts to the global economy are eye-catching, yet I think people tend to give them greater credibility than warranted.

Also, “jobs affected” does not mean jobs lost; it could mean jobs augmented or even a transition to new jobs. It is best to use the analyses, such as Goldman’s or McKinsey’s, to spark our imaginations about the plausible scenarios about the future of work and of workers. It’s better, in my view, to then proactively brainstorm the many factors that could affect which one actually comes to pass, look for early warning signs and prepare accordingly.

The history of the future of work has been full of surprises; don’t be shocked if tomorrow’s technologies are equally confounding.

Generative AI is a Minefield for Copyright Law

Will Copyright Law Favor Artificial Intelligence End Users?

In 2022, an AI-generated work of art won the Colorado State Fair’s art competition. The artist, Jason Allen, had used Midjourney – a generative AI system trained on art scraped from the internet – to create the piece. The process was far from fully automated: Allen went through some 900 iterations over 80 hours to create and refine his submission.

Yet his use of AI to win the art competition triggered a heated backlash online, with one Twitter user claiming, “We’re watching the death of artistry unfold right before our eyes.”

As generative AI art tools like Midjourney and Stable Diffusion have been thrust into the limelight, so too have questions about ownership and authorship.

These tools’ generative ability is the result of training them with scores of prior artworks, from which the AI learns how to create artistic outputs.

Should the artists whose art was scraped to train the models be compensated? Who owns the images that AI systems produce? Is the process of fine-tuning prompts for generative AI a form of authentic creative expression?

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of, Robert Mahari, JD-PhD Student, Massachusetts Institute of Technology (MIT), Jessica Fjeld, Lecturer on Law, Harvard Law School, and Ziv Epstein, PhD Student in Media Arts and Sciences, Massachusetts Institute of Technology (MIT).

On one hand, technophiles rave over work like Allen’s. But on the other, many working artists consider the use of their art to train AI to be exploitative.

We’re part of a team of 14 experts across disciplines that just published a paper on generative AI in Science magazine. In it, we explore how advances in AI will affect creative work, aesthetics and the media. One of the key questions that emerged has to do with U.S. copyright laws, and whether they can adequately deal with the unique challenges of generative AI.

Copyright laws were created to promote the arts and creative thinking. But the rise of generative AI has complicated existing notions of authorship.

Photography Serves as a Helpful Lens

Generative AI might seem unprecedented, but history can act as a guide.

Take the emergence of photography in the 1800s. Before its invention, artists could only try to portray the world through drawing, painting or sculpture. Suddenly, reality could be captured in a flash using a camera and chemicals.

As with generative AI, many argued that photography lacked artistic merit. In 1884, the U.S. Supreme Court weighed in on the issue and found that cameras served as tools that an artist could use to give an idea visible form; the “masterminds” behind the cameras, the court ruled, should own the photographs they create.

From then on, photography evolved into its own art form and even sparked new abstract artistic movements.

AI Can’t Own Outputs

Unlike inanimate cameras, AI possesses capabilities – like the ability to convert basic instructions into impressive artistic works – that make it prone to anthropomorphization. Even the term “artificial intelligence” encourages people to think that these systems have humanlike intent or even self-awareness.

This led some people to wonder whether AI systems can be “owners.” But the U.S. Copyright Office has stated unequivocally that only humans can hold copyrights.

So who can claim ownership of images produced by AI? Is it the artists whose images were used to train the systems? The users who type in prompts to create images? Or the people who build the AI systems?

Infringement or Fair Use?

While artists draw obliquely from past works that have educated and inspired them in order to create, generative AI relies on training data to produce outputs.

This training data consists of prior artworks, many of which are protected by copyright law and which have been collected without artists’ knowledge or consent. Using art in this way might violate copyright law even before the AI generates a new work.

Still from ‘All watched over by machines of loving grace’ by Memo Akten, 2021. Created using custom AI software. Memo Akten, CC BY-SA

For Jason Allen to create his award-winning art, Midjourney was trained on 100 million prior works.

Was that a form of infringement? Or was it a new form of “fair use,” a legal doctrine that permits the unlicensed use of protected works if they’re sufficiently transformed into something new?

While AI systems do not contain literal copies of the training data, they do sometimes manage to recreate works from the training data, complicating this legal analysis.

Will contemporary copyright law favor end users and companies over the artists whose content is in the training data?

To mitigate this concern, some scholars propose new regulations to protect and compensate artists whose work is used for training. These proposals include a right for artists to opt out of their data’s being used for generative AI or a way to automatically compensate artists when their work is used to train an AI.

Muddled Ownership

Training data, however, is only part of the process. Frequently, artists who use generative AI tools go through many rounds of revision to refine their prompts, which suggests a degree of originality.

Answering the question of who should own the outputs requires looking into the contributions of all those involved in the generative AI supply chain.

The legal analysis is easier when an output is different from works in the training data. In this case, whoever prompted the AI to produce the output appears to be the default owner.

However, copyright law requires meaningful creative input – a standard satisfied by clicking the shutter button on a camera. It remains unclear how courts will decide what this means for the use of generative AI. Is composing and refining a prompt enough?

Matters are more complicated when outputs resemble works in the training data. If the resemblance is based only on general style or content, it is unlikely to violate copyright, because style is not copyrightable.

The illustrator Hollie Mengert encountered this issue firsthand when her unique style was mimicked by generative AI engines in a way that did not capture what, in her eyes, made her work unique. Meanwhile, the singer Grimes embraced the tech, “open-sourcing” her voice and encouraging fans to create songs in her style using generative AI.

If an output contains major elements from a work in the training data, it might infringe on that work’s copyright. Recently, the Supreme Court ruled that Andy Warhol’s drawing of a photograph was not permitted by fair use. That means that using AI to just change the style of a work – say, from a photo to an illustration – is not enough to claim ownership over the modified output.

While copyright law tends to favor an all-or-nothing approach, scholars at Harvard Law School have proposed new models of joint ownership that allow artists to gain some rights in outputs that resemble their works.

In many ways, generative AI is yet another creative tool that allows a new group of people access to image-making, just like cameras, paintbrushes or Adobe Photoshop. But a key difference is this new set of tools relies explicitly on training data, and therefore creative contributions cannot easily be traced back to a single artist.

The ways in which existing laws are interpreted or reformed – and whether generative AI is appropriately treated as the tool it is – will have real consequences for the future of creative expression.

Biotech Companies to Benefit from AI Efficiencies and Analysis

Enabling Better Drug Discovery Outcomes with Machine Learning

Can the long road to bring new medical treatments or therapies to market be shortened by introducing artificial intelligence? AI applied to the early stage of the discovery process, which often involves new insight into a disease or treatment mechanism, may soon provide researchers many more potential candidates or designs to evaluate. AI can also help in the sorting and evaluation of these candidates to improve the success rates of those that make it into the lab for further study.

Benefits AI Brings to Biotech Research

The cost of bringing a single drug to market in terms of time and money is substantial. Estimates are in the $2.8 billion range, and the average timeline for drug development exceeds a decade. On top of this, there is a low level of certainty of taking a promising molecule all the way to market. The success rate of translating preclinical research findings into effective clinical treatments is low; failure rates are estimated to be around 90%.

The refinement of digital sorting and calculating with advanced computational technologies, such as artificial intelligence (AI) and machine learning (ML), have the potential to revolutionize pharmaceutical research and development (R&D). Despite it still being a young technology, AI-enabled applications and algorithms are already making an impact in drug discovery and development processes.

One of the significant benefits of ML in drug development is its ability to recognize patterns and unveil insights that might be missed by conventional data analysis or take substantially less time to recognize. AI, and ML technologies can help a biotech company do precursory evaluation, accelerate the design and testing of molecules, streamline the testing processes, and provide a faster understanding along the way if the molecule will perform as expected. With improved clinical success and reduced costs throughout the development pipeline, AI may be shot in the arm the industry needs.

Adoption of AI in Biotechnology

While any full-scale adoption of AI in the pharmaceutical industry is still evolving and finding its place, implementation and investment are growing. Top global pharmaceutical companies have increased their R&D investment in AI by nearly 25% over the past three years – this indicates a recognition of the perceived benefits.

The interest and investment in AI drug discovery is fueled by several factors. As touched on earlier, a more efficient and cost-effective drug development process would be of great benefit. AI can significantly reduce both time and cost. And the sooner more effective treatments are available, the better. Chronic diseases, such as cancer, autoimmune problems, neurological disorders, and cardiovascular diseases, creates an ongoing demand for improved drugs and therapies. AI’s ability to analyze vast amounts of data, identify patterns, and then learn from the information at an accelerated rate can allow researchers to shorten timelines to final conclusions.  

Even more exciting is the growing availability of large datasets thanks to the rise of big data. With an increase in the volume, variety and velocity of data, and the AI-assisted ability to make sense of it, outcomes are expected to be improved. These datasets, obtained from various sources like electronic medical records and genomic databases, allow successful AI applications in drug discovery. Technological advancements, especially in ML algorithms, have been contributing to the growth of AI in medicine. And they are growing more sophisticated, allowing for accurate pattern identification in complex biological systems. Collaborations between academia, industry, and government agencies have further accelerated growth sharing knowledge and resources.

Trends in AI and ML Biotechnology

While considered a young technological field, AI-enabled drug discovery is being shaped by a number of new trends and technologies. Modern AI algorithms are now capable of analyzing intricate biological systems and foretelling the effects of medications on human cells and tissues. By detecting probable adverse effects early on in the development phase, the predictive ability helps prevent failures in the later stages.

By generating candidates that fit certain requirements, generative models can accelerate the design of completely new medications. But other technology is also now available to assist. By offering scalable processing resources, cloud computing dramatically cuts down on both time and expense. By simulating the interaction of hundreds of chemicals with disease targets, virtual drug screening enables the fast screening of drugs.

A higher understanding of disease biology and the discovery of new therapeutic targets is being made possible by integrative techniques that incorporate many data sources not available a short while ago.

Constraints on AI-Assisted Biotech Research

While AI can speed up certain aspects of drug discovery, it cannot replace most traditional lab testing. Hands-on experimentation and data collection on living organisms are expected to always be necessary, many of these processes during the clinical trial stages cannot be sped up.

Regulatory bodies, like the FDA, are also cautious about embracing AI fully, raising concerns about transparency and accountability in decision-making processes.

.

Take Away

The near future of artificial intelligence and machine learning assuming a larger role in enabling drug discovery and more efficient R&D looks bright. The technology offers real promise for more efficient and cost-effective drug development processes – this would address the need for new therapies for chronic diseases.

The time-consuming process of testing on real subjects is not expected to be replaced or overly streamlined by technology, but finding subjects and evaluating results can also benefit from the new technology.

Paul Hoffman

Managing Editor, Channelchek

Sources

https://5058440.fs1.hubspotusercontent-na1.net/hubfs/5058440/cold%20outreach%20use%20case%20images/Pathways%20for%20Successful%20AI%20Adoption%20in%20Drug%20Development%20-%20VeriSIM%20Life.pdf

https://www.mckinsey.com/industries/life-sciences/our-insights/ai-in-biopharma-research-a-time-to-focus-and-scale

https://www.drugdiscoveryonline.com/doc/the-global-market-for-ai-in-drug-discovery-to-sextuple-by-0001

https://www.mckinsey.com/industries/life-sciences/our-insights/we-can-invent-new-biology-molly-gibson-on-the-power-of-ai

https://www.fda.gov/patients/learn-about-drug-and-device-approvals/drug-development-process