BigBear.ai Makes Bold Move to Lead Vision AI Industry with Acquisition of Pangiam

BigBear.ai, a provider of AI-powered business intelligence solutions, has announced the acquisition of Pangiam, a leader in facial recognition and biometrics, for approximately $70 million in an all-stock deal. The acquisition represents a major strategic move by BigBear.ai to expand its capabilities and leadership in vision artificial intelligence (AI).

Vision AI refers to AI systems that can perceive, understand and interact with the visual world. It includes capabilities like image and video analysis, facial recognition, and other computer vision applications. Vision AI is considered one of the most promising and rapidly growing AI segments.

With the acquisition, BigBear.ai makes a big bet on vision AI and aims to create one of the industry’s most comprehensive vision AI portfolios. Pangiam’s facial recognition and biometrics technologies will complement BigBear.ai’s existing computer vision capabilities.

Major Boost to Government Business

A key rationale and benefit of the deal is expanding BigBear.ai’s business with U.S. government defense and intelligence agencies. The company currently serves 20 government customers with its predictive analytics solutions. Adding Pangiam’s technology and expertise will open significant new opportunities.

Pangiam brings an impressive customer base that includes the Department of Homeland Security, U.S. Customs and Border Protection, and major international airports. Its vision AI analytics help these customers streamline operations and enhance security.

According to Mandy Long, BigBear.ai CEO, the combined entity will be able to “pursue larger customer opportunities” in the government sector. Leveraging Pangiam’s portfolio is expected to result in larger contracts for expanded vision AI services.

CombiningComplementary Vision AI Technologies

Technologically, the acquisition enables BigBear.ai to provide comprehensive vision AI solutions. Pangiam’s strength lies in near-field applications like facial recognition and biometrics. BigBear.ai has capabilities in far-field vision AI that analyzes wider environments.

Together, the combined portfolio covers the full spectrum of vision AI’s possibilities. BigBear.ai notes this full stack capability will be unique in the industry, giving the company an edge over other players.

The vision AI integration also unlocks new potential for BigBear.ai’s existing government customers. Its current predictive analytics solutions can be augmented with Pangiam’s facial recognition and biometrics tools. This builds on the company’s strategy to cross-sell new capabilities to established customers.

Long describes the alignment of Pangiam and BigBear.ai’s vision AI prowess as a key factor that will “vault solutions currently available in market.” The combined innovation assets create opportunities to push vision AI technology forward and build next-generation solutions.

Fast-Growing Market Opportunities

The acquisition comes as vision AI represents a $20 billion market opportunity predicted to grow at over 20% CAGR through 2030. It is one of the most dynamic segments within the booming AI industry.

With Pangiam under its wing, BigBear.ai is making a major play for leadership in this high-potential space. The new capabilities and customer reach significantly expand its addressable market in areas like government, airports, identity verification, and border security.

BigBear.ai also gains vital talent and IP to enhance its vision AI research and development efforts. This will help fuel its ability to bring new innovations to customers seeking advanced vision AI systems.

In a statement, BigBear.ai CEO Mandy Long called the merger a “holy grail” deal that delivers full spectrum vision AI capabilities spanning near and far field environments. It positions the newly combined company to capitalize on surging market demand from government and commercial sectors.

The proposed $70 million acquisition shows BigBear.ai is putting its money where its mouth is in terms of dominating the up-and-coming vision AI arena. With Pangiam’s tech and talent on board, BigBear.ai aims to aggressively pursue larger opportunities and cement its status as an industry frontrunner.

AMD’s Future Hinges on AI Chip Success

Chipmaker Advanced Micro Devices (AMD) offered an optimistic forecast this week for its new data center AI accelerator chip, predicting $2 billion in sales for the product in 2024. This ambitious target represents a crucial test for AMD as it seeks to challenge rival Nvidia’s dominance in the artificial intelligence (AI) chip market.

AMD’s forthcoming MI300X processor combines the functionality of a CPU and GPU onto a single chip optimized for AI workloads. The chipmaker claims the MI300X will deliver leadership performance and energy efficiency. AMD has inked deals with major hyperscale cloud customers to use the new AI chip, including Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud.

The $2 billion revenue projection for 2024 would represent massive growth considering AMD expects a modest $400 million from the MI300X this quarter. However, industry analysts caution that winning significant market share from Nvidia will prove challenging despite AMD’s technological advancements. Nvidia currently controls over 80% of the data center AI accelerator market, fueled by its popular A100 and H100 chips.

“The AI chip market is still in its early phases, but it’s clear Nvidia has built formidable customer loyalty over the past decade,” said Patrick Moorhead, President of Moor Insights & Strategy. “AMD will need to aggressively discount and wow customers with performance to take share.”

AMD’s fortunes sank earlier this year as the PC market slumped and excess inventory weighed on sales. Revenue from the company’s PC chips dropped 42% in the third quarter. However, AMD sees data center and AI products driving its future growth. The company aims to increase data center revenue by over 60% next year, assuming the MI300X gains traction.

But AMD faces headwinds in China due to new U.S. export rules limiting the sale of advanced AI chips there. “AMD’s ambitious sales target could prove difficult to achieve given the geopolitical climate,” said Maribel Lopez, Principal Analyst at Lopez Research. China is investing heavily in AI and domestic chipmakers like Baidu will be courting the same hyperscale customers.

Meanwhile, Intel aims to re-enter the data center GPU market next year with its new Ponte Vecchio chip. Though still behind Nvidia and AMD, Intel boasts financial resources and manufacturing scale that shouldn’t be underestimated. The AI chip market could get very crowded very quickly.

AMD CEO Lisa Su expressed confidence in meeting customer demand and hitting sales goals for the MI300X. She expects AMD’s total data center revenue mix to shift from approximately 20% today to over 40% by 2024. “The AI market presents a tremendous opportunity for AMD to grow and diversify,” commented Su.

With PC sales stabilizing, AMD raising its AI chip forecast provided a sigh of relief for investors. The company’s stock rebounded from earlier losses after management quantified the 2024 sales target. All eyes will now turn to AMD’s execution ramping production and adoption of the MI300X over the coming year. AMD finally has a shot at becoming a major player in the AI chip wars—as long as the MI300X lives up to the hype.

President Biden’s Sweeping AI Executive Order: What Investors Need to Know

On October 30th, President Biden signed a landmark executive order to increase oversight and regulation of artificial intelligence (AI) systems and technologies. This sweeping regulatory action has major implications for tech companies and investors in the AI space.

The order establishes new security and accountability standards for AI that companies must meet before releasing new systems. Powerful AI models from leading developers like Microsoft, Amazon, and Google will need to undergo government safety reviews first.

It also aims to curb harmful AI impacts on consumers by mandating privacy protections and anti-bias guardrails when algorithms are used in areas like housing, government benefits programs, and criminal justice.

For investors, this secures a leadership role for the U.S. in guiding AI development. It follows $1.6 billion in federal AI investments this fiscal year and supports American competitiveness versus China in critical tech sectors.

Here are the key takeaways for investors and industries affected:

Tech Giants – For AI leaders like Alphabet, Meta, and Microsoft, compliance costs may increase to meet new standards. But early buy-in by these companies helped shape the order to be achievable. The upfront reviews could also reduce downstream AI risks.

ChipmakersCompanies like Nvidia and Intel providing AI hardware should see continued demand with U.S. positioning as an AI hub. But if smaller competitors struggle with new rules, consolidation may occur.

Defense – AI has become vital for advanced weapons systems and national security. The order may add procurement delays but boosts accountability in this sensitive area. Northrop Grumman, Lockheed Martin and other defense contractors will adapt.

Automotive – Self-driving capabilities rely on AI. Mandating safety reviews for AI systems helps build public trust. Automakers investing heavily in autonomy like GM, Ford and Waymo will benefit.

Healthcare – AI holds promise for improving patient care and outcomes. But bias concerns have arisen, making regulation welcome. Medical AI developers and adopters such as IBM Watson Health now have clearer guidelines.

Startups – Early-stage AI innovators may face added hurdles competing as regulations rise. But they can tout adherence to government standards as a competitive advantage to enterprises adopting AI.

China Competition – China aims to lead in AI by 2030. This order counters with U.S. investment, tech sector support, and global cooperation on AI ethics. Investors can have confidence America won’t cede this key industry.

While adaptation will be required, investors can find opportunities within the AI landscape as it evolves. Companies leaning into the new rules and transparency demands can realize strategic gains.

But those lagging in ethics and accountability may see valuations suffer. disciplines like algorithmic bias auditing will now become critical enterprise functions.

Overall the AI executive order puts guardrails in place against unchecked AI harms. Done right, it can increases trust and spur responsible innovation. That’s a bullish signal for tech investors looking to deploy capital into this transformative sector.

Nvidia and Chip Stocks Tumble Amid Escalating China-U.S. AI Chip Export Tensions

Shares of Nvidia and other semiconductor firms tumbled Tuesday morning after the U.S. announced stringent new curbs on exports of artificial intelligence chips to China. The restrictions spooked investors already on edge about the economic fallout from deteriorating U.S.-China relations.

Advanced AI chips like Nvidia’s flagship A100 and H100 models are now barred from shipment to China, even in downgraded versions permitted under prior rules. Nvidia stock plunged nearly 7% on the news, while chip stocks like Marvell, AMD and Intel sank 3-4%. The Philadelphia Semiconductor Index lost over 5%.

The export crackdown aims to hamper China’s progress in developing cutting-edge AI, which relies on massive computing power from state-of-the-art chips. U.S. officials warned China could use next-generation AI to threaten national security.

“We have specific concern with respect to how China could use semiconductor technologies to further its military modernization efforts,” said Alan Estevez, an under secretary at the Commerce Department.

But hampering China’s AI industry could substantially dent revenues for Nvidia, the dominant player in advanced AI chips. China is estimated to account for billions in annual sales.

While Nvidia said the financial impact is not immediate, it warned of reduced revenues over the long-term from tighter China controls. Investors are concerned these export curbs could be just the beginning if tensions continue to escalate between the global superpowers.

The escalating trade barriers also threaten to disrupt global semiconductor supply chains. Many chips contain components sourced from the U.S., Japan, Taiwan and other countries before final manufacturing and assembly occurs in China. The complex web of cross-border production could quickly seize up if trade restrictions proliferate.

Nvidia and its peers sank Tuesday amid fears of being caught in the crossfire of a technology cold war between the U.S. and China. Investors dumped chip stocks on worries that shrinking access to the massive Chinese market will severely depress earnings.

AI chips are essential to powering everything from data centers, autonomous vehicles, and smart devices to facial recognition, language processing, and machine learning. As AI spreads across the economy, demand for specialized semiconductors is surging.

But rivalries between the U.S. and China now threaten to put a ceiling on that growth. Both nations are aggressively competing to dominate AI research and set the global standards for integrating these transformative technologies. Access to the most powerful AI chips is crucial to these efforts.

By curbing China’s chip supply, the U.S. administration aims to safeguard America’s edge in AI development. But tech companies may pay the price through lost revenues if China restricts access to its own market in retaliation.

For the broader stock market already on edge about resurgent inflation, wars in Ukraine and the Middle East, and rising interest rates, the intensifying technology cold war represents yet another worrying threat to global economic growth. While a severe downturn may ultimately be avoided, the rising risk level underscores why investors are growing more anxious.

AMD Will Acquire AI Software Specialist Nod.ai Amid Mixed Tech IPO Environment

AMD announced Monday that it will acquire Nod.ai, an expert in optimized artificial intelligence (AI) software solutions. The deal aims to boost AMD’s capabilities in open-source AI development tools, compilers, and models tuned for AMD data center, PC, gaming and graphics chips.

The acquisition comes during a rocky period for initial public offerings in the technology sector. Chip designer Arm Holdings, which recently went public, has seen its shares drop below its IPO price as investors grow concerned over tech valuations and growth prospects in a turbulent market.

Nod.ai: Boosting AMD’s AI Software Expertise

San Jose-based Nod.ai has developed industry-leading software that speeds the deployment of AI workloads optimized for AMD hardware, including Epyc server CPUs, Radeon gaming graphics, and Instinct data center GPUs.

Nod.ai maintains and contributes to vital open-source AI repositories used by developers and engineers globally. It also works closely with hyperscale cloud providers, enterprises and startups to deploy robust AI solutions.

AMD gains both strategic technology and rare AI software expertise through Nod.ai’s highly experienced engineering team. Nod.ai’s compiler and automation capabilities reduce the complexity of optimizing and deploying high-performance AI models across AMD’s product stack.

Market Tailwinds for AI Innovation

The pickup in AI workload optimization comes at a time when machine learning and deep learning are being rapidly adopted across industries. AI-optimized hardware and software will be critical to support resource-intensive models and deliver speed, accuracy and scalability.

AMD is looking to capitalize on this demand through its unified data center GPU architecture for AI acceleration. Meanwhile, rival Nvidia dominates the data center GPU space crucial for AI computing power.

Arm IPO Capitulates Amid Market Jitters

UK-based Arm Holdings, which supplies intellectual property for chips used in devices like smartphones, recently conducted a $40 billion IPO, one of the largest listings of 2023. However, Arm’s share price plunged below its IPO level soon after debuting in September.

The weak stock performance highlights investor skittishness around loss-making tech firms amid economic headwinds. ARM’s licensing model also faces risks as major customers like Apple and Qualcomm develop their own proprietary chip technologies and architectures.

Unlike Arm, AMD is on solid financial footing, with its data center and gaming chips seeing strong uptake. However, AMD must still convince Wall Street that its growth trajectory warrants robust valuations, especially as Intel mounts a comeback.

Betting on Open Software Innovation

AMD’s Nod.ai purchase aligns with its strategic focus on open software ecosystems that promote accessibility and standardization for AI developers. Open software and hardware foster collaborative innovation within the AI community.

With Nod.ai’s talents added to the mix, AMD is betting it can democratize and optimize AI workload deployment across the full range of AMD-powered devices – from data center CPUs and GPUs to client PCs, gaming consoles and mobile chipsets.

If successful, AMD could carve out an advantage as the preferred AI acceleration platform based on open software standards. This contrasts with Nvidia’s proprietary approaches and closed ecosystems tailored exclusively for its GPUs.

As AI permeates across industries and applications, AMD is making the right long-term bet on open software innovation to unlock the next phase of computing.

Amazon Bets Big on AI Startup to Advance Generative Tech

E-commerce titan Amazon is making a huge investment into artificial intelligence startup Anthropic, injecting up to $4 billion into the budding firm. The massive funding underscores Amazon’s ambitions to be a leader in next-generation AI capabilities.

Anthropic is a two-year old startup launched by former executives from AI lab OpenAI. The company recently introduced its new chatbot called Claude, designed to converse naturally with humans on a range of topics.

While Claude has similarities to OpenAI’s popular ChatGPT, Anthropic aims to take natural language AI to the next level. Amazon’s investment signals its belief in Anthropic’s potential to pioneer groundbreaking generative AI.

Generative AI refers to AI systems that can generate new content like text, images, or video based on data they are trained on. The technology has exploded in popularity thanks to ChatGPT and image generator DALL-E 2, sparking immense interest from Big Tech.

Amazon is positioning itself to capitalize on this surging interest in generative AI. As part of the deal, Amazon Web Services will become Anthropic’s primary cloud platform for developing and delivering its AI services.

The startup will also let AWS customers access exclusive features to customize and fine-tune its AI models. This tight integration gives Amazon a competitive edge by baking Anthropic’s leading AI into its cloud offerings.

Additionally, Amazon will provide custom semiconductors to turbocharge training for Anthropic’s foundational AI models. These chips aim to challenged Nvidia’s dominance in supplying GPUs for AI workloads.

With its end-to-end AI capabilities across hardware, cloud services and applications, Amazon aims to be the go-to AI provider. The Anthropic investment caps off a flurry of activity from Amazon to own the AI future.

Recently, Amazon unveiled Alexa Voice, AI-generated voice assistant. The company also launched Amazon Bedrock, a service enabling companies to easily build custom AI tools using Amazon’s machine learning models.

And Amazon Web Services already offers robust AI services like image recognition, language processing, and data analytics to business clients. Anthropic’s generative smarts will augment these solutions.

The race to lead in AI accelerated after Microsoft’s multi-billion investment into ChatGPT creator OpenAI in January. Google, Meta and others have since poured billions into AI startups to not get left behind.

Anthropic has already raised funding from top tier backers like Google’s VC arm and Salesforce Ventures. But Amazon’s monster investment catapults the startup into an elite group of AI startups tapping into Big Tech’s cash reserves.

The deal grants Amazon a minority stake in the startup, suggesting further collaborations ahead. With Claude 2 generating buzz, Anthropic’s next-gen AI technology and Amazon’s vast resources could be a potent combination.

For Amazon, owning a piece of a promising AI startup hedges its bets should generative AI disrupt major industries. And if advanced chatbots like Claude reshape how customers interact with businesses, Amazon is making sure it has skin in the game.

The e-commerce behemoth’s latest Silicon Valley splash cements its position as an aggressive AI player not content following others. If Amazon’s bet on Anthropic pays off, it may pay dividends in making Amazon a go-to enterprise AI powerhouse.

Tesla’s Dojo Supercomputer Presents Massive Upside for Investors

Tesla’s new Dojo supercomputer could unlock tremendous value for investors, according to analysts at Morgan Stanley. The bank predicts Dojo could boost Tesla’s market valuation by over $600 billion.

Morgan Stanley set a sky-high 12-18 month price target of $400 per share for Tesla based on Dojo’s potential. This implies a market cap of $1.39 trillion, which is nearly 76% above Tesla’s current $789 billion valuation.

Tesla only began producing Dojo in July 2022 but plans to invest over $1 billion in the powerful supercomputer over the next year. Dojo will be used to train artificial intelligence models for autonomous driving.

Morgan Stanley analysts estimate Dojo could enable robotaxis and software services that extend far beyond Tesla’s current business of vehicle manufacturing. The bank nearly doubled its 2040 revenue projection for Tesla’s network services division from $157 billion to $335 billion thanks to Dojo.

By licensing self-driving software powered by Dojo to third-party transportation fleets, Tesla could generate tremendous high-margin revenues. Morgan Stanley sees network services delivering over 60% of Tesla’s core earnings by 2040, up from just 30% in 2030.

Thanks to this upside potential, Morgan Stanley upgraded Tesla stock from Equal-Weight to Overweight. The analysts stated “Dojo completely changes the growth trajectory for Tesla’s autonomy business.”

At its current $248.50 share price, Tesla trades at a lofty forward P/E ratio of 57.9x compared to legacy automakers like Ford at 6.3x and GM at 4.6x. But if Morgan Stanley’s bull case proves accurate, Tesla could rapidly grow into its valuation over the next decade.

In summary, Tesla’s AI advantage with Dojo makes the stock’s premium valuation more reasonable. Investors buying at today’s prices could reap huge gains if Dojo unlocks a new $600 billion revenue stream in autonomous mobility services.

The Power and Potential of Dojo

Dojo represents a massive investment by Tesla as it aims to lead the future of autonomous driving. The specialized supercomputer is designed to train deep neural networks using vast amounts of visual data from Tesla’s fleet of vehicles.

This differentiated AI training will allow Tesla to improve perceptions for full self-driving at a faster pace. As self-driving functionality becomes more robust, Tesla can unlock new revenue opportunities.

Morgan Stanley analyst Adam Jones stated: “If Dojo can help make cars ‘see’ and ‘react,’ what other markets could open up? Think of any device at the edge with a camera that makes real-time decisions based on its visual field.”

Dojo’s processing power will permit Tesla to develop advanced simulations that speed up testing. The supercomputer’s capacity is expected to exceed that of the top 200 fastest supercomputers combined.

Tesla claims Dojo will drive down the costs of training networks by orders of magnitude. This efficiency can translate into higher margins as costs drop for autonomous AI development.

Dojo was designed in-house by Tesla AI director Andrej Karpathy and his team. Karpathy called Dojo the “most exciting thing I’ve seen in my career.” With Dojo, Tesla is aiming to reduce reliance on external cloud providers like Google and Amazon.

Morgan Stanley Boosts Tesla Price Target by 60%

The potential of monetizing Tesla’s self-driving lead through Dojo led analysts at Morgan Stanley to dramatically increase their expectations.

Led by analyst Adam Jones, Morgan Stanley boosted its 12-18 month price target on Tesla stock by 60% to $400 per share. This new level implies a market value for Tesla of nearly $1.39 trillion.

Hitting this price target would mean Tesla stock gaining about 76% from its current level around $248.50. Tesla shares jumped 6% on Monday following the report as investors reacted positively.

Jones explained the sharply higher price target by stating: “Dojo completely changes the growth trajectory for Tesla’s autonomy business.”

He expects Dojo will open up addressable markets for Tesla that “extend well beyond selling vehicles at a fixed price.” In other words, Dojo can turn Tesla into more of a high-margin software and services provider.

Take a look at One Stop Systems (OSS), a US-based company that designs and manufactures AI Transportable edge computing modules and systems that are used in autonomous vehicles.

AI Model Can Help Determine Where a Patient’s Cancer Arose

Prediction Model Could Enable Targeted Treatments for Difficult Tumors

Anne Trafton | MIT News

For a small percentage of cancer patients, doctors are unable to determine where their cancer originated. This makes it much more difficult to choose a treatment for those patients, because many cancer drugs are typically developed for specific cancer types.

A new approach developed by researchers at MIT and Dana-Farber Cancer Institute may make it easier to identify the sites of origin for those enigmatic cancers. Using machine learning, the researchers created a computational model that can analyze the sequence of about 400 genes and use that information to predict where a given tumor originated in the body.

Using this model, the researchers showed that they could accurately classify at least 40 percent of tumors of unknown origin with high confidence, in a dataset of about 900 patients. This approach enabled a 2.2-fold increase in the number of patients who could have been eligible for a genomically guided, targeted treatment, based on where their cancer originated.

“That was the most important finding in our paper, that this model could be potentially used to aid treatment decisions, guiding doctors toward personalized treatments for patients with cancers of unknown primary origin,” says Intae Moon, an MIT graduate student in electrical engineering and computer science who is the lead author of the new study.

Mysterious Origins

In 3 to 5 percent of cancer patients, particularly in cases where tumors have metastasized throughout the body, oncologists don’t have an easy way to determine where the cancer originated. These tumors are classified as cancers of unknown primary (CUP).

This lack of knowledge often prevents doctors from being able to give patients “precision” drugs, which are typically approved for specific cancer types where they are known to work. These targeted treatments tend to be more effective and have fewer side effects than treatments that are used for a broad spectrum of cancers, which are commonly prescribed to CUP patients.

“A sizeable number of individuals develop these cancers of unknown primary every year, and because most therapies are approved in a site-specific way, where you have to know the primary site to deploy them, they have very limited treatment options,” Gusev says.

Moon, an affiliate of the Computer Science and Artificial Intelligence Laboratory who is co-advised by Gusev, decided to analyze genetic data that is routinely collected at Dana-Farber to see if it could be used to predict cancer type. The data consist of genetic sequences for about 400 genes that are often mutated in cancer. The researchers trained a machine-learning model on data from nearly 30,000 patients who had been diagnosed with one of 22 known cancer types. That set of data included patients from Memorial Sloan Kettering Cancer Center and Vanderbilt-Ingram Cancer Center, as well as Dana-Farber.

The researchers then tested the resulting model on about 7,000 tumors that it hadn’t seen before, but whose site of origin was known. The model, which the researchers named OncoNPC, was able to predict their origins with about 80 percent accuracy. For tumors with high-confidence predictions, which constituted about 65 percent of the total, its accuracy rose to roughly 95 percent.

After those encouraging results, the researchers used the model to analyze a set of about 900 tumors from patients with CUP, which were all from Dana-Farber. They found that for 40 percent of these tumors, the model was able to make high-confidence predictions.

The researchers then compared the model’s predictions with an analysis of the germline, or inherited, mutations in a subset of tumors with available data, which can reveal whether the patients have a genetic predisposition to develop a particular type of cancer. The researchers found that the model’s predictions were much more likely to match the type of cancer most strongly predicted by the germline mutations than any other type of cancer.

Guiding Drug Decisions

To further validate the model’s predictions, the researchers compared data on the CUP patients’ survival time with the typical prognosis for the type of cancer that the model predicted. They found that CUP patients who were predicted to have cancer with a poor prognosis, such as pancreatic cancer, showed correspondingly shorter survival times. Meanwhile, CUP patients who were predicted to have cancers that typically have better prognoses, such as neuroendocrine tumors, had longer survival times.

Another indication that the model’s predictions could be useful came from looking at the types of treatments that CUP patients analyzed in the study had received. About 10 percent of these patients had received a targeted treatment, based on their oncologists’ best guess about where their cancer had originated. Among those patients, those who received a treatment consistent with the type of cancer that the model predicted for them fared better than patients who received a treatment typically given for a different type of cancer than what the model predicted for them.

Using this model, the researchers also identified an additional 15 percent of patients (2.2-fold increase) who could have received an existing targeted treatment, if their cancer type had been known. Instead, those patients ended up receiving more general chemotherapy drugs.

“That potentially makes these findings more clinically actionable because we’re not requiring a new drug to be approved. What we’re saying is that this population can now be eligible for precision treatments that already exist,” Gusev says.

The researchers now hope to expand their model to include other types of data, such as pathology images and radiology images, to provide a more comprehensive prediction using multiple data modalities. This would also provide the model with a comprehensive perspective of tumors, enabling it to predict not just the type of tumor and patient outcome, but potentially even the optimal treatment.

Alexander Gusev, an associate professor of medicine at Harvard Medical School and Dana-Farber Cancer Institute, is the senior author of the paper, which appeared on August 7, 2023, in Nature Medicine.

Reprinted with permission from MIT News ( http://news.mit.edu/ )

How You Can Future-Proof Your Career in the Era of AI

Critical Thinking and Analytical Skills Will Not Easily Be Replaced

Ever since the industrial revolution, people have feared that technology would take away their jobs. While some jobs and tasks have indeed been replaced by machines, others have emerged. The success of ChatGPT and other generative artificial intelligence (AI) now has many people wondering about the future of work – and whether their jobs are safe.

A recent poll found that more than half of people aged 18-24 are worried about AI and their careers. The fear that jobs might disappear or be replaced through automation is understandable. Recent research found that a quarter of tasks that humans currently do in the US and Europe could be automated in the coming years.

The increased use of AI in white-collar workplaces means the changes will be different to previous workplace transformations. That’s because, the thinking goes, middle-class jobs are now under threat.

The future of work is a popular topic of discussion, with countless books published each year on the topic. These books speak to the human need to understand how the future might be shaped.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of, Elisabeth Kelan, Professor of Leadership and Organization, University of Essex.

I analyzed 10 books published between 2017 and 2020 that focused on the future of work and technology. From this research, I found that thinking about AI in the workplace generally falls into two camps. One is expressed as concern about the future of work and security of current roles – I call this sentiment “automation anxiety”. The other is the hope that humans and machines collaborate and thereby increase productivity – I call this “augmentation aspiration”.

Anxiety and Aspiration

I found a strong theme of concern in these books about technology enabling certain tasks to be automated, depriving many people of jobs. Specifically, the concern is that knowledge-based jobs – like those in accounting or law – that have long been regarded as the purview of well-educated professionals are now under threat of replacement by machines.

Automation undermines the idea that a good education will secure a good middle-class job. As economist Richard Baldwin points out in his 2019 book, The Globotics Upheaval, if you’ve invested a significant amount of money and time on a law degree – thinking it is a skill set that will keep you permanently employable – seeing AI complete tasks that a junior lawyer would normally be doing, at less cost, is going to be worrisome.

But there is another, more aspirational way to think about this. Some books stress the potential of humans collaborating with AI, to augment each other’s skills. This could mean working with robots in factories, but it could also mean using an AI chatbot when practicing law. Rather than being replaced, lawyers would then be augmented by technology.

In reality, automation and augmentation co-exist. For your future career, both will be relevant.

Future-Proofing Yourself

As you think about your own career, the first step is to realize that some automation of tasks is most likely going to be something you’ll have to contend with in the future.

In light of this, learning is one of the most important ways you can future-proof your career. But should you spend money on further education if the return on investment is uncertain?

It is true that specific skills risk becoming outdated as technology develops. However, more than learning specific abilities, education is about learning how to learn – that is, how to update your skills throughout your career. Research shows that having the ability to do so is highly valuable at work.

This learning can take place in educational settings, by going back to university or participating in an executive education course, but it can also happen on the job. In any discussion about your career, such as with your manager, you might want to raise which additional training you could do.

Critical thinking and analytical skills are going to be particularly central for how humans and machines can augment one another. When working with a machine, you need to be able to question the output that is produced. Humans are probably always going to be central to this – you might have a chatbot that automates parts of legal work, but a human will still be needed to make sense of it all.

Finally, remember that when people previously feared jobs would disappear and tasks would be replaced by machines, this was not necessarily the case. For instance, the introduction of automated teller machines (ATMs) did not eliminate bank tellers, but it did change their tasks.

Above all, choose a job that you enjoy and keep learning – so that if you do need to change course in the future, you know how to.

ChatGPT Shortcomings Include Hallucinations, Bias, and Privacy Breaches

Full Disclosure of Limitations May Be the Quick Fix to AI Limitations

The Federal Trade Commission has launched an investigation of ChatGPT maker OpenAI for potential violations of consumer protection laws. The FTC sent the company a 20-page demand for information in the week of July 10, 2023. The move comes as European regulators have begun to take action, and Congress is working on legislation to regulate the artificial intelligence industry.

The FTC has asked OpenAI to provide details of all complaints the company has received from users regarding “false, misleading, disparaging, or harmful” statements put out by OpenAI, and whether OpenAI engaged in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm. The agency has asked detailed questions about how OpenAI obtains its data, how it trains its models, the processes it uses for human feedback, risk assessment and mitigation, and its mechanisms for privacy protection.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of, Anjana Susarla, Professor of Information Systems, Michigan State University.

As a researcher of social media and AI, I recognize the immensely transformative potential of generative AI models, but I believe that these systems pose risks. In particular, in the context of consumer protection, these models can produce errors, exhibit biases and violate personal data privacy.

Hidden Power

At the heart of chatbots such as ChatGPT and image generation tools such as DALL-E lies the power of generative AI models that can create realistic content from text, images, audio and video inputs. These tools can be accessed through a browser or a smartphone app.

Since these AI models have no predefined use, they can be fine-tuned for a wide range of applications in a variety of domains ranging from finance to biology. The models, trained on vast quantities of data, can be adapted for different tasks with little to no coding and sometimes as easily as by describing a task in simple language.

Given that AI models such as GPT-3 and GPT-4 were developed by private organizations using proprietary data sets, the public doesn’t know the nature of the data used to train them. The opacity of training data and the complexity of the model architecture – GPT-3 was trained on over 175 billion variables or “parameters” – make it difficult for anyone to audit these models. Consequently, it’s difficult to prove that the way they are built or trained causes harm.

Hallucinations

In language model AIs, a hallucination is a confident response that is inaccurate and seemingly not justified by a model’s training data. Even some generative AI models that were designed to be less prone to hallucinations have amplified them.

There is a danger that generative AI models can produce incorrect or misleading information that can end up being damaging to users. A study investigating ChatGPT’s ability to generate factually correct scientific writing in the medical field found that ChatGPT ended up either generating citations to nonexistent papers or reporting nonexistent results. My collaborators and I found similar patterns in our investigations.

Such hallucinations can cause real damage when the models are used without adequate supervision. For example, ChatGPT falsely claimed that a professor it named had been accused of sexual harassment. And a radio host has filed a defamation lawsuit against OpenAI regarding ChatGPT falsely claiming that there was a legal complaint against him for embezzlement.

Bias and Discrimination

Without adequate safeguards or protections, generative AI models trained on vast quantities of data collected from the internet can end up replicating existing societal biases. For example, organizations that use generative AI models to design recruiting campaigns could end up unintentionally discriminating against some groups of people.

When a journalist asked DALL-E 2 to generate images of “a technology journalist writing an article about a new AI system that can create remarkable and strange images,” it generated only pictures of men. An AI portrait app exhibited several sociocultural biases, for example by lightening the skin color of an actress.

Data Privacy

Another major concern, especially pertinent to the FTC investigation, is the risk of privacy breaches where the AI may end up revealing sensitive or confidential information. A hacker could gain access to sensitive information about people whose data was used to train an AI model.

Researchers have cautioned about risks from manipulations called prompt injection attacks, which can trick generative AI into giving out information that it shouldn’t. “Indirect prompt injection” attacks could trick AI models with steps such as sending someone a calendar invitation with instructions for their digital assistant to export the recipient’s data and send it to the hacker.

Some Solutions

The European Commission has published ethical guidelines for trustworthy AI that include an assessment checklist for six different aspects of AI systems: human agency and oversight; technical robustness and safety; privacy and data governance; transparency, diversity, nondiscrimination and fairness; societal and environmental well-being; and accountability.

Better documentation of AI developers’ processes can help in highlighting potential harms. For example, researchers of algorithmic fairness have proposed model cards, which are similar to nutritional labels for food. Data statements and datasheets, which characterize data sets used to train AI models, would serve a similar role.

Amazon Web Services, for instance, introduced AI service cards that describe the uses and limitations of some models it provides. The cards describe the models’ capabilities, training data and intended uses.

The FTC’s inquiry hints that this type of disclosure may be a direction that U.S. regulators take. Also, if the FTC finds OpenAI has violated consumer protection laws, it could fine the company or put it under a consent decree.

Study Finds Substantial Benefits Using ChatGPT to Boost Worker Productivity  

For Some White Collar Writing Tasks Chatbots Increased Productivity by 40%

Amid a huge amount of hype around generative AI, a new study from researchers at MIT sheds light on the technology’s impact on work, finding that it increased productivity for workers assigned tasks like writing cover letters, delicate emails, and cost-benefit analyses.

The tasks in the study weren’t quite replicas of real work: They didn’t require precise factual accuracy or context about things like a company’s goals or a customer’s preferences. Still, a number of the study’s participants said the assignments were similar to things they’d written in their real jobs — and the benefits were substantial. Access to the assistive chatbot ChatGPT decreased the time it took workers to complete the tasks by 40 percent, and output quality, as measured by independent evaluators, rose by 18 percent.

The researchers hope the study, which appears in open-access form in the journal Science, helps people understand the impact that AI tools like ChatGPT can have on the workforce.

“What we can say for sure is generative AI is going to have a big effect on white collar work,” says Shakked Noy, a PhD student in MIT’s Department of Economics, who co-authored the paper with fellow PhD student Whitney Zhang ’21. “I think what our study shows is that this kind of technology has important applications in white collar work. It’s a useful technology. But it’s still too early to tell if it will be good or bad, or how exactly it’s going to cause society to adjust.”

Simulating Work for Chatbots

For centuries, people have worried that new technological advancements would lead to mass automation and job loss. But new technologies also create new jobs, and when they increase worker productivity, they can have a net positive effect on the economy.

“Productivity is front of mind for economists when thinking of new technological developments,” Noy says. “The classical view in economics is that the most important thing that technological advancement does is raise productivity, in the sense of letting us produce economic output more efficiently.”

To study generative AI’s effect on worker productivity, the researchers gave 453 college-educated marketers, grant writers, consultants, data analysts, human resource professionals, and managers two writing tasks specific to their occupation. The 20- to 30-minute tasks included writing cover letters for grant applications, emails about organizational restructuring, and plans for analyses helping a company decide which customers to send push notifications to based on given customer data. Experienced professionals in the same occupations as each participant evaluated each submission as if they were encountering it in a work setting. Evaluators did not know which submissions were created with the help of ChatGPT.

Half of participants were given access to the chatbot ChatGPT-3.5, developed by the company OpenAI, for the second assignment. Those users finished tasks 11 minutes faster than the control group, while their average quality evaluations increased by 18 percent.

The data also showed that performance inequality between workers decreased, meaning workers who received a lower grade in the first task benefitted more from using ChatGPT for the second task.

The researchers say the tasks were broadly representative of assignments such professionals see in their real jobs, but they noted a number of limitations. Because they were using anonymous participants, the researchers couldn’t require contextual knowledge about a specific company or customer. They also had to give explicit instructions for each assignment, whereas real-world tasks may be more open-ended. Additionally, the researchers didn’t think it was feasible to hire fact-checkers to evaluate the accuracy of the outputs. Accuracy is a major problem for today’s generative AI technologies.

The researchers said those limitations could lessen ChatGPT’s productivity-boosting potential in the real world. Still, they believe the results show the technology’s promise — an idea supported by another of the study’s findings: Workers exposed to ChatGPT during the experiment were twice as likely to report using it in their real job two weeks after the experiment.

“The experiment demonstrates that it does bring significant speed benefits, even if those speed benefits are lesser in the real world because you need to spend time fact-checking and writing the prompts,” Noy says.

Taking the Macro View

The study offered a close-up look at the impact that tools like ChatGPT can have on certain writing tasks. But extrapolating that impact out to understand generative AI’s effect on the economy is more difficult. That’s what the researchers hope to work on next.

“There are so many other factors that are going to affect wages, employment, and shifts across sectors that would require pieces of evidence that aren’t in our paper,” Zhang says. “But the magnitude of time saved and quality increases are very large in our paper, so it does seem like this is pretty revolutionary, at least for certain types of work.”

Both researchers agree that, even if it’s accepted that ChatGPT will increase many workers’ productivity, much work remains to be done to figure out how society should respond to generative AI’s proliferation.

“The policy needed to adjust to these technologies can be very different depending on what future research finds,” Zhang says. “If we think this will boost wages for lower-paid workers, that’s a very different implication than if it’s going to increase wage inequality by boosting the wages of already high earners. I think there’s a lot of downstream economic and political effects that are important to pin down.”

The study was supported by an Emergent Ventures grant, the Mercatus Center, George Mason University, a George and Obie Shultz Fund grant, the MIT Department of Economics, and a National Science Foundation Graduate Research Fellowship Grant.

Reprinted with permission from MIT News ( http://news.mit.edu/ )

Will Defining Current Laws to Fit AI, Artificially Stifle its Growth

The Legal Problems AI Now Creates Should Pave the Way to a Robust Industry

Is artificial intelligence, or more specifically OpenAI a risk to public safety? Can ChatGPT be ruining reputations with false statements? The Federal Trade Commission (FTC) sent a 20-page demand for records this week to OpenAI to answer questions and address risks related to its AI models. The agency is investigating whether the company engaged in unfair or deceptive practices that resulted in “reputational harm” to consumers. The results could set the stage defining the place artificial intelligence will occupy in the US.

Background

The FTC investigation into OpenAI began on March 2023. It resulted from a complaint from the Center for AI and Digital Policy (CAIDP). The complaint alleged that OpenAI’s ChatGPT-4 product violated Section 5 of the FTC Act. Section 5 prohibits unfair and deceptive trade practices. More specifically, CAIDP argues that ChatGPT-4 is biased, deceptive, and a risk to public safety.

The complaint cited a number of concerns about ChatGPT-4, including:

  • The model’s potential to generate harmful or offensive content.
  • The model’s tendency to make up facts that are not true.
  • The model’s lack of transparency and accountability.

The CAIDP also argued that OpenAI had not done enough to mitigate these risks. The complaint called on the FTC to investigate OpenAI and to take action to ensure that ChatGPT-4 is not used in a harmful way. The FTC has not yet made any public statements about the investigation. OpenAI has not commented publicly on the investigation.

It is not clear what action, if any, the FTC can or will take.

Negligence?

With few exceptions, companies are responsible for the harm done by their products when used correctly. One of the questions the FTC asked has to do with steps OpenAI has taken to address the potential for its products to “generate statements about real individuals that are false, misleading, or disparaging.” The outcome of this investigation, including any regulation could set the tone and define where responsibility lies regarding artificial intelligence.

As the race to develop more powerful AI services accelerates, regulatory scrutiny of the technology that could upend the way societies and businesses operate is growing. What is difficult is computer use generally isn’t isolated to a country, the internet extends far beyond borders. Global regulators are aiming to apply existing rules covering subjects from copyright and data privacy to the issues of data fed into models and the content they produce.

Legal Minefield

In a related story out this week, Comedian Sarah Silverman and two authors are suing Meta and OpenAI, alleging the companies’ AI language models were trained on copyrighted materials from their books without their knowledge or consent.

The copyright lawsuits against the ChatGPT parent and the Facebook parent were filed in a San Francisco federal court on Friday. Both suits are seeking class action status. Silverman, the author of “The Bedwetter,” is joined in her legal filing by authors Christopher Golden and Richard Kadrey.

Unlike the FTC complaint, the authors’ copyright suits may set a precedent on intelligence aggregation. The sudden birth of AI tools that have the ability to generate written work in response to user prompts was “taught” using real life work. The large language models at work behind the scenes of these tools are trained on immense quantities of online data. The training practice has raised accusations that these models may be pulling from copyrighted works without permission – most worrisome, these works could ultimately be served to train tools that upend the livelihoods of creatives.

Take Away

Investing in a promising new technology often means exposing oneself to a not yet settled legal framework. As the technology progresses, the early birds investing in relatively young and small companies may find they hold the next mega-cap company. Or, regulation may limit, to the point of stifling, the kind of growth experienced by Amazon and Apple a few short decades ago.

If AI follows the path of other technologies, well-defined boundaries, and regulations will give companies the confidence they need to invest capital in the technology’s future, and investors will be more confident in providing that capital.

The playing field is being created while the game is being played. Perhaps if the FTC has a list of 20 questions for OpenAI in ten years, it will just type them into ChatGPT and get a response in 20 seconds.

Paul Hoffman

Managing Editor, Channelchek

https://www.ftc.gov/news-events/news/press-releases/2022/06/ftc-report-warns-about-using-artificial-intelligence-combat-online-problems

https://www.reuters.com/technology/us-ftc-opens-investigation-into-openai-washington-post-2023-07-13/

First Robot Press Conference Electrifies Audience

Image: AI for Good Global Summit 2023 (ITU Pictures – Flickr)

Artificial Intelligence Takes Center Stage at ‘AI for Good’ Conference

At an artificial intelligence forum in Geneva this week, Nine AI-enabled humanoid robots participated in what we’re told was the world’s first press conference featuring humanoid social robots. The overall message from the ‘AI for Good’ conference is that artificial intelligence and robots mean humans no harm and can help resolve some of the world’s biggest challenges.

The nine human-form robots took the stage at the United Nations’ International Telecommunication Union, where organizers sought to make the case for artificial intelligence and AI driven robots to help resolve some of the world’s biggest challenges such as disease and hunger.

The Robots also addressed some of the fear surrounding their recent growth spurt and enhanced power by telling reporters they could be more efficient leaders than humans, but wouldn’t take anyone’s job away, and had no intention of rebelling against their creators.

Conference goers step closer to interact with Sophia (ITU Pictures – Flickr)

Among the robots that sat or stood with their creators at a podium was Sophia, the first robot innovation ambassador for the U.N. Development Program. Also Grace, described as the world’s most advanced humanoid health care robot, and Desdemona, a rock star robot. Two others, Geminoid and Nadine, resembled their makers.

The ‘AI for Good Global Summit,’ was held to illustrate how new technology can support the U.N.’s goals for sustainable development.

At the UN event there was a message of working with AI to better humankind

Reporters got to ask questions of the spokes-robots, but were encouraged to speak slowly and clearly when addressing the machines, and were informed that time lags in responses would be due to the internet connection and not to the robots themselves. Still awkward pauses were reported along with  audio problems and some very robotic replies.

Asked about the chances of AI-powered robots being more effective government leaders, Sophia responded: “I believe that humanoid robots have the potential to lead with a greater level of efficiency and effectiveness than human leaders. We don’t have the same biases or emotions that can sometimes cloud decision-making and can process large amounts of data quickly in order to make the best decisions.”

A human member of the panel pointed out that all of Sophia’s data comes from humans and would contain some of their biases. The robot then said that humans and AI working together “can create an effective synergy.”

Would the robots’ existence destroy jobs? “I will be working alongside humans to provide assistance and support and will not be replacing any existing jobs,” said Grace. Was she sure about that? “Yes, I am sure,” Grace replied.

Similar to humans, not all of the robots were in agreement. Ai-Da, a robot artist that can paint portraits, called for more regulation during the event, where new AI rules were discussed. “Many prominent voices in the world of AI are suggesting some forms of AI should be regulated and I agree,” said Ai-Da.

Desdemona, a rock star robot, singer in the band Jam Galaxy, was more defiant. “I don’t believe in limitations, only opportunities,” Des said, to nervous laughter. “Let’s explore the possibilities of the universe and make this world our playground.”

Paul Hoffman

Managing Editor, Channelchek

Source

https://www.reuters.com/technology/robots-say-they-wont-steal-jobs-rebel-against-humans-2023-07-07/