Investors Should Be Clear on the Difference Between Algo-Driven and AI-Based

Understanding the Distinction between Algorithm-Driven Functionality and Artificial Intelligence

Technological advancement doesn’t sleep. With the field evolving and unfolding so rapidly, it is hard to keep up with the differences between machine learning, artificial intelligence, and generative AI. Natural language processing and speech recognition also overlap heavily, yet they are definitively different. Two “whiz-bang” technologies that are often confused, or whose names are at least used interchangeably, are “artificial intelligence” and “algorithm-driven functionality.” While both contribute to the advancement of technology, investors who don’t understand the distinction risk falling behind. Below we aim to clarify the differences between algorithm-driven functionality and artificial intelligence functionality; shedding light on their unique characteristics and applications will help investors understand the nature of companies they may be evaluating.

Algorithm-Driven Functionality

Algorithm-driven functionality primarily relies on predefined rules and step-by-step instructions to accomplish specific tasks. An algorithm is a sequence of logical instructions designed to solve a particular problem or achieve a specific outcome. Algorithms have been utilized for centuries, even before the advent of computers, to solve mathematical problems and perform calculations.

In state-of-the-art technology, algorithms continue to play a crucial role. They are employed in search engines to rank web pages, in recommendation systems to suggest personalized content, in market analysis to indicate potential trades, and in sorting to organize data efficiently. Algorithm-driven functionality typically operates within predefined parameters, making it predictable and deterministic.
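To make this concrete, below is a minimal, hypothetical sketch of rule-based trading logic of the kind mentioned above: a moving-average crossover signal. The windows and rules are predefined by a human, nothing is learned, and the same inputs always produce the same output.

```python
# Minimal sketch of algorithm-driven (rule-based) logic: a moving-average
# crossover trading signal. Every window and rule is predefined by a human;
# nothing is learned, and the same inputs always produce the same output.
from statistics import mean

def crossover_signal(prices, short_window=5, long_window=20):
    """Return 'buy', 'sell', or 'hold' based purely on fixed rules."""
    if len(prices) < long_window:
        return "hold"                      # not enough history for the rule
    short_avg = mean(prices[-short_window:])
    long_avg = mean(prices[-long_window:])
    if short_avg > long_avg:
        return "buy"                       # short-term trend above long-term
    if short_avg < long_avg:
        return "sell"
    return "hold"

# Deterministic: rerunning with the same prices always gives the same answer.
print(crossover_signal([100 + 0.5 * i for i in range(30)]))  # -> 'buy'
```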

While algorithms are powerful tools, they lack the ability to learn or adapt to new situations. They require explicit instructions to perform tasks and cannot make decisions based on contextual understanding or real-time data analysis. Therefore, algorithm-driven systems may be a poor fit for complex, dynamic scenarios that demand flexibility and adaptability.

Artificial Intelligence Functionality

Artificial intelligence encompasses a broader set of technologies that enable machines to simulate human intelligence. AI systems possess the ability to perceive, reason, learn, and make decisions autonomously. Unlike algorithm-driven functionality, AI algorithms are capable of adapting and improving their performance through continuous learning from data.

Eventually, they can seem to have a mind of their own.

Machine learning (ML) is a prominent subset of AI that empowers algorithms to automatically learn patterns and insights from vast amounts of data. By analyzing historical information, ML algorithms can identify trends, make predictions, and generate valuable insights. Deep learning, a specialized branch of ML, employs artificial neural networks to process large datasets and extract intricate patterns, allowing AI systems to perform complex tasks such as image recognition and natural language processing.
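For contrast with the rule-based sketch earlier, here is a minimal, hypothetical illustration of the machine learning approach: the decision rule is fit from synthetic historical data with scikit-learn rather than written out explicitly. The features, labels, and model choice are illustrative assumptions, not a trading system.

```python
# Minimal sketch of the machine learning alternative: the decision rule is
# fit from (synthetic, invented) historical data instead of hand-coded.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical features: yesterday's return and volume change.
X = rng.normal(size=(500, 2))
# Hypothetical label: whether the price rose the next day (synthetic rule + noise).
y = (0.8 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)   # the "rules" are learned, not written
print(model.predict([[0.4, -0.1]]))      # prediction for unseen market conditions
```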

AI functionality can be found in various applications across different sectors. Chatbots like ChatGPT that understand and respond to human queries, autonomous vehicles that navigate and react to their surroundings, and recommendation systems that provide personalized suggestions are all examples of AI-driven technologies. These systems are capable of adapting to changing circumstances, improving their performance over time, and addressing complex, real-world challenges.

Differentiating Factors

The key distinction between algorithm-driven functionality and AI functionality lies in their capability to adapt and learn. While algorithms are rule-based and operate within predefined boundaries, AI algorithms possess the ability to learn from data, identify patterns, and modify their behavior accordingly. AI algorithms can recognize context, make informed decisions, and navigate uncharted territory with limited explicit instructions.

What frightens many is that AI functionality exhibits a higher degree of autonomy than algorithm-driven systems. AI algorithms can analyze and interpret complex data, extract meaningful insights, and make decisions in real time without relying on explicit instructions or human intervention. This autonomy enables AI systems to operate in dynamic environments where rules may not be explicitly defined, making them suitable for tasks that require adaptability and learning.

Take Away

Algorithm-driven functionality and artificial intelligence functionality are distinct concepts within the realm of technology. While algorithm-driven systems rely on predefined rules and instructions, AI functionality encompasses a broader set of technologies that enable machines to simulate human intelligence, adapt to new situations, and learn from data. Understanding these differences is crucial for leveraging the strengths of each approach and harnessing the full potential of technology to solve complex problems and drive innovation.

Paul Hoffman

Managing Editor, Channelchek

Source

eWeek, October 3, 2022

Trading With Artificial Intelligence – Benefits and Pitfalls

ChatGPT-Powered Wall Street: The Benefits and Perils of Using Artificial Intelligence to Trade Stocks and Other Financial Instruments

Artificial Intelligence-powered tools, such as ChatGPT, have the potential to revolutionize the efficiency, effectiveness and speed of the work humans do.

And this is true in financial markets as much as in sectors like health care, manufacturing and pretty much every other aspect of our lives.

I’ve been researching financial markets and algorithmic trading for 14 years. While AI offers lots of benefits, the growing use of these technologies in financial markets also points to potential perils. A look at Wall Street’s past efforts to speed up trading by embracing computers and AI offers important lessons on the implications of using them for decision-making.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Pawan Jain, Assistant Professor of Finance, West Virginia University.

Program Trading Fuels Black Monday

In the early 1980s, fueled by advancements in technology and financial innovations such as derivatives, institutional investors began using computer programs to execute trades based on predefined rules and algorithms. This helped them complete large trades quickly and efficiently.

Back then, these algorithms were relatively simple and were primarily used for so-called index arbitrage, which involves trying to profit from discrepancies between the price of a stock index – like the S&P 500 – and that of the stocks it’s composed of.
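As a simplified, hypothetical illustration of index arbitrage, the sketch below compares an index future’s traded price with the “fair” value implied by its component stocks and signals a trade when the gap exceeds an assumed cost threshold. The tickers, weights, prices, and threshold are all invented.

```python
# Simplified index-arbitrage sketch: compare an index future's traded price
# with the value implied by its components and trade the gap. The tickers,
# prices, weights, and threshold below are invented for illustration.
components = {              # ticker: (hypothetical price, index weight)
    "AAA": (150.0, 0.40),
    "BBB": (90.0, 0.35),
    "CCC": (60.0, 0.25),
}

fair_value = sum(price * weight for price, weight in components.values())
futures_price = 108.50      # hypothetical traded price of the index future
basis = futures_price - fair_value      # the gap the arbitrageur tries to capture
threshold = 0.25            # ignore gaps smaller than assumed trading costs

if basis > threshold:
    action = "sell the future, buy the basket of stocks"
elif basis < -threshold:
    action = "buy the future, sell the basket of stocks"
else:
    action = "no trade"

print(f"fair value {fair_value:.2f}, basis {basis:+.2f}: {action}")
```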

As technology advanced and more data became available, this kind of program trading became increasingly sophisticated, with algorithms able to analyze complex market data and execute trades based on a wide range of factors. These program traders continued to grow in number on the largely unregulated trading freeways – on which over a trillion dollars worth of assets change hands every day – causing market volatility to increase dramatically.

Eventually this resulted in the massive stock market crash in 1987 known as Black Monday. The Dow Jones Industrial Average suffered what was at the time the biggest percentage drop in its history, and the pain spread throughout the globe.

In response, regulatory authorities implemented a number of measures to restrict the use of program trading, including circuit breakers that halt trading when there are significant market swings and other limits. But despite these measures, program trading continued to grow in popularity in the years following the crash.

HFT: Program Trading on Steroids

Fast forward 15 years, to 2002, when the New York Stock Exchange introduced a fully automated trading system. As a result, program traders gave way to more sophisticated automations with much more advanced technology: High-frequency trading.

HFT uses computer programs to analyze market data and execute trades at extremely high speeds. Unlike program traders that bought and sold baskets of securities over time to take advantage of an arbitrage opportunity – a difference in price of similar securities that can be exploited for profit – high-frequency traders use powerful computers and high-speed networks to analyze market data and execute trades at lightning-fast speeds. High-frequency traders can conduct trades in approximately one 64-millionth of a second, compared with the several seconds it took traders in the 1980s.

These trades are typically very short term in nature and may involve buying and selling the same security multiple times in a matter of nanoseconds. AI algorithms analyze large amounts of data in real time and identify patterns and trends that are not immediately apparent to human traders. This helps traders make better decisions and execute trades at a faster pace than would be possible manually.

Another important application of AI in HFT is natural language processing, which involves analyzing and interpreting human language data such as news articles and social media posts. By analyzing this data, traders can gain valuable insights into market sentiment and adjust their trading strategies accordingly.

Benefits of AI Trading

These AI-based, high-frequency traders operate very differently than people do.

The human brain is slow, inaccurate and forgetful. It is incapable of the quick, high-precision, floating-point arithmetic needed to analyze huge volumes of data and identify trade signals. Computers are millions of times faster, with essentially infallible memory, perfect attention and limitless capability for analyzing large volumes of data in split seconds.

And, so, just like most technologies, HFT provides several benefits to stock markets.

These traders typically buy and sell assets at prices very close to the market price, which means they don’t charge investors high fees. This helps ensure that there are always buyers and sellers in the market, which in turn helps to stabilize prices and reduce the potential for sudden price swings.

High-frequency trading can also help to reduce the impact of market inefficiencies by quickly identifying and exploiting mispricing in the market. For example, HFT algorithms can detect when a particular stock is undervalued or overvalued and execute trades to take advantage of these discrepancies. By doing so, this kind of trading can help to correct market inefficiencies and ensure that assets are priced more accurately.

Image: Stock exchanges used to be packed with traders buying and selling securities, as in this scene from 1983. Today’s trading floors are increasingly empty as AI-powered computers handle more and more of the work.

The Downsides

But speed and efficiency can also cause harm.

HFT algorithms can react so quickly to news events and other market signals that they can cause sudden spikes or drops in asset prices.

Additionally, HFT financial firms are able to use their speed and technology to gain an unfair advantage over other traders, further distorting market signals. The volatility created by these extremely sophisticated AI-powered trading beasts led to the so-called flash crash in May 2010, when stocks plunged and then recovered in a matter of minutes – erasing and then restoring about $1 trillion in market value.

Since then, volatile markets have become the new normal. In 2016 research, two co-authors and I found that volatility – a measure of how rapidly and unpredictably prices move up and down – increased significantly after the introduction of HFT.

The speed and efficiency with which high-frequency traders analyze the data mean that even a small change in market conditions can trigger a large number of trades, leading to sudden price swings and increased volatility.

In addition, research I published with several other colleagues in 2021 shows that most high-frequency traders use similar algorithms, which increases the risk of market failure. That’s because as the number of these traders increases in the marketplace, the similarity in these algorithms can lead to similar trading decisions.

This means that all of the high-frequency traders might trade on the same side of the market if their algorithms release similar trading signals. That is, they all might try to sell in case of negative news or buy in case of positive news. If there is no one to take the other side of the trade, markets can fail.

Enter ChatGPT

That brings us to a new world of ChatGPT-powered trading algorithms and similar programs. They could take the problem of too many traders on the same side of a deal and make it even worse.

In general, humans, left to their own devices, will tend to make a diverse range of decisions. But if everyone’s deriving their decisions from a similar artificial intelligence, this can limit the diversity of opinion.

Consider an extreme, nonfinancial situation in which everyone depends on ChatGPT to decide on the best computer to buy. Consumers are already very prone to herding behavior, in which they tend to buy the same products and models. For example, reviews on Yelp, Amazon and so on motivate consumers to pick among a few top choices.

Since decisions made by the generative AI-powered chatbot are based on past training data, there would be a similarity in the decisions suggested by the chatbot. It is highly likely that ChatGPT would suggest the same brand and model to everyone. This might take herding to a whole new level and could lead to shortages in certain products and services as well as severe price spikes.

This becomes more problematic when the AI making the decisions is informed by biased and incorrect information. AI algorithms can reinforce existing biases when systems are trained on biased, old or limited data sets. And ChatGPT and similar tools have been criticized for making factual errors.

In addition, since market crashes are relatively rare, there isn’t much data on them. Since generative AIs depend on training data to learn, their lack of knowledge about crashes could make such crashes more likely to happen.

For now, at least, it seems most banks won’t be allowing their employees to take advantage of ChatGPT and similar tools. Citigroup, Bank of America, Goldman Sachs and several other lenders have already banned their use on trading-room floors, citing privacy concerns.

But I strongly believe banks will eventually embrace generative AI, once they resolve concerns they have with it. The potential gains are too significant to pass up – and there’s a risk of being left behind by rivals.

But the risks to financial markets, the global economy and everyone are also great, so I hope they tread carefully.

The Limits to the Artificial Intelligence Revolution

What Will AI Never Be Good At?

Artificial intelligence (AI) is a true disruptive technology. As any informed content writer can tell you, the technology creates efficiencies by speeding up data gathering, research, and even the creation of graphics that specifically reflect the content. As an example, it is arguably quicker to use ChatGPT to provide a list of ticker symbols from company names than it is to look them up one by one. Small time savers like these add up: over the course of a week, far more can be produced because AI tools save a few minutes here and there.

This raises the question: what are the limits of AI – what can’t it do?

Worker Displacement

Technological revolutions have always benefitted humankind in the long run; in the short run, they have been disruptive, often displacing people who then have to retrain.

A new Goldman Sachs report says “significant disruption” could be on the horizon for the labor market. Goldman’s analysis of jobs in the U.S. and Europe shows that two-thirds of jobs could be automated at least to some degree. In the U.S., “of those occupations which are exposed, most have a significant — but partial — share of their workload (25-50%) that can be replaced,” Goldman Sachs’ analysts said in the paper.

Around the world, as many as 300 million jobs could be affected, the report says. Changes to labor markets are therefore likely – although historically, technological progress doesn’t just make jobs redundant, it also creates new ones. And the added productivity allows the masses to live wealthier lives. This was clearly the end result of the industrial revolution, and years after the computer revolution, we enjoy a high rate of employment and have at our fingertips much that we never even dreamed of.

The Goldman report says the use of AI technology could boost labor productivity growth and raise global GDP by as much as 7% over time.

There are few reasons to expect that the AI revolution won’t also provide more goods and services per person for a richer existence. But, what about the disruption in the interim? I was curious to know what artificial intelligence is not expected to be able to do. There isn’t much information out there, so I went to an AI source and fed it a series of pointed questions about its nature. Part of that nature is to not intentionally lie. I found the responses worth sharing, as we will all soon be impacted by what the technology can and cannot do.

Limitations of AI that Will Persist

Artificial intelligence has come a long way in recent years and the speed of progression and adoption is accelerating. As a result, applications have become increasingly sophisticated. But, there are still many things that AI cannot do now and may never be able to do.

One thing that AI cannot do now and may never be able to do is to truly understand human emotions and intentions. While AI algorithms can detect patterns in data and recognize certain emotional expressions, they do not have the ability to experience emotions themselves. This means that AI cannot truly understand the nuances of human communication, which can lead to misinterpretation and miscommunication.

Another limitation of AI is that it cannot replicate the creativity and intuition of humans. While AI can generate new ideas based on existing data, it lacks the ability to come up with truly original and innovative ideas. This is because creativity and intuition are often based on a combination of experience, emotion, and imagination, which are difficult to replicate in a machine.

AI also struggles with tasks that require common sense reasoning or context awareness. For example, AI may be able to identify a picture of a cat, but it may struggle to understand that a cat is an animal that can be petted or that it can climb trees. This is because AI lacks the contextual understanding that humans have built up through years of experience and interaction with the world around us.

In the realm of stocks and economics, AI has shown promise in analyzing data and making predictions, but there are still limitations to its abilities. For example, AI can analyze large datasets and identify patterns in market trends, but it cannot account for unexpected events or human behavior that may affect the market. This means that while AI can provide valuable insights, it cannot guarantee accurate predictions or prevent market volatility.

Another limitation of AI in economics is its inability to understand the complexities of social and political systems. Economic decisions are often influenced by social and political factors, such as government policies and public opinion. While AI can analyze economic data and identify correlations, it lacks the ability to understand the underlying social and political context that drives economic decisions.

A concern some have about artificial intelligence is that it may perpetuate biases that exist in the data it analyzes. This is the “garbage in, garbage out” data problem on steroids. For example, if historical data on stock prices is biased towards a certain demographic or industry, AI algorithms may replicate these biases in their predictions. This can lead to an amplified bias that proves faulty and not useful for economic decision making.

Take Away

AI has shown remarkable progress in recent years, but, as with everything that came before, there are still things that it cannot do now and may never be able to do. AI lacks the emotional intelligence, creativity, and intuition of humans, as well as common sense reasoning and an understanding of social and political systems. In economics and stock market analysis, AI can provide valuable insights, but it cannot assure accurate predictions or prevent market volatility. So while companies are investing in ways to make our lives more productive with artificial intelligence and machine learning, it remains important to invest in our own human intelligence, growth and expertise.

Paul Hoffman

Managing Editor, Channelchek

Sources

OpenAI. (2021). ChatGPT [Computer software]. Retrieved from https://openai.com

https://www.cnbc.com/2023/05/16/how-generative-ai-chatgpt-will-change-jobs-at-all-work-levels.html

One Stop Systems (OSS) – Expanding Military Business


Thursday, May 11, 2023

One Stop Systems, Inc. (OSS) designs and manufactures innovative AI Transportable edge computing modules and systems, including ruggedized servers, compute accelerators, expansion systems, flash storage arrays, and Ion Accelerator™ SAN, NAS, and data recording software for AI workflows. These products are used for AI data set capture, training, and large-scale inference in the defense, oil and gas, mining, autonomous vehicles, and rugged entertainment applications. OSS utilizes the power of PCI Express, the latest GPU accelerators and NVMe storage to build award-winning systems, including many industry firsts, for industrial OEMs and government customers. The company enables AI on the Fly® by bringing AI datacenter performance to ‘the edge,’ especially on mobile platforms, and by addressing the entire AI workflow, from high-speed data acquisition to deep learning, training, and inference. OSS products are available directly or through global distributors. For more information, go to www.onestopsystems.com.

Joe Gomes, Managing Director, Equity Research Analyst, Generalist , Noble Capital Markets, Inc.

Joshua Zoepfel, Research Associate, Noble Capital Markets, Inc.

Refer to the full report for the price target, fundamental analysis, and rating.

New Award. One Stop Systems received an initial order from a new military prime contractor for OSS 3U short-depth servers (SDS) for use by a U.S. Air Force anti-electronic warfare system. OSS has already commenced shipments under an initial purchase order. This program is the company’s first with this prime contractor. It is valued at approximately $3.5 million over the next three years.

SDS. The servers feature proprietary OSS Gen 4 PCI Express NVMe controllers, OSS transportable hot-swap drive canisters, and NVMe SSDs that support government encryption standards. The servers are expected to serve as a head storage node for data collection at U.S. Air Force ground stations that house military aircraft. They will be capable of recording large volumes of simulation data and delivering it at high speeds with low latency to data scientists on the network.


Get the Full Report

Equity Research is available at no cost to Registered users of Channelchek. Not a Member? Click ‘Join’ to join the Channelchek Community. There is no cost to register, and we never collect credit card information.

This Company Sponsored Research is provided by Noble Capital Markets, Inc., a FINRA and S.E.C. registered broker-dealer (B/D).

*Analyst certification and important disclosures included in the full report. NOTE: investment decisions should not be based upon the content of this research summary. Proper due diligence is required before making any investment decision. 

AI is Exciting – and an Ethical Minefield

Four Essential Reads on the Risks and Concerns Over Artificial Intelligence

If you’re like me, you’ve spent a lot of time over the past few months trying to figure out what this AI thing is all about. Large-language models, generative AI, algorithmic bias – it’s a lot for the less tech-savvy of us to sort out, trying to make sense of the myriad headlines about artificial intelligence swirling about.

But understanding how AI works is just part of the dilemma. As a society, we’re also confronting concerns about its social, psychological and ethical effects. Here we spotlight articles about the deeper questions the AI revolution raises about bias and inequality, the learning process, its impact on jobs, and even the artistic process.

Ethical Debt

When a company rushes software to market, it often accrues “technical debt”: the cost of having to fix bugs after a program is released, instead of ironing them out beforehand.

There are examples of this in AI as companies race ahead to compete with each other. More alarming, though, is “ethical debt,” when development teams haven’t considered possible social or ethical harms – how AI could replace human jobs, for example, or when algorithms end up reinforcing biases.

Casey Fiesler, a technology ethics expert at the University of Colorado Boulder, wrote that she’s “a technology optimist who thinks and prepares like a pessimist”: someone who puts in time speculating about what might go wrong.

That kind of speculation is an especially useful skill for technologists trying to envision consequences that might not impact them, Fiesler explained, but that could hurt “marginalized groups that are underrepresented” in tech fields. When it comes to ethical debt, she noted, “the people who incur it are rarely the people who pay for it in the end.”

Is Anybody There?

AI programs’ abilities can give the impression that they are sentient, but they’re not, explained Nir Eisikovits, director of the Applied Ethics Center at the University of Massachusetts Boston. “ChatGPT and similar technologies are sophisticated sentence completion applications – nothing more, nothing less,” he wrote.

But saying AI isn’t conscious doesn’t mean it’s harmless.

“To me,” Eisikovits explained, “the pressing question is not whether machines are sentient but why it is so easy for us to imagine that they are.” Humans easily project human features onto just about anything, including technology. That tendency to anthropomorphize “points to real risks of psychological entanglement with technology,” according to Eisikovits, who studies AI’s impact on how people understand themselves.

Considering how many people talk to their pets and cars, it shouldn’t be a surprise that chatbots can come to mean so much to people who engage with them. The next steps, though, are “strong guardrails” to prevent programs from taking advantage of that emotional connection.

Putting Pen to Paper

From the start, ChatGPT fueled parents’ and teachers’ fears about cheating. How could educators – or college admissions officers, for that matter – figure out if an essay was written by a human or a chatbot?

But AI sparks more fundamental questions about writing, according to Naomi Baron, an American University linguist who studies technology’s effects on language. AI’s potential threat to writing isn’t just about honesty, but about the ability to think itself.

Baron pointed to novelist Flannery O’Connor’s remark that “I write because I don’t know what I think until I read what I say.” In other words, writing isn’t just a way to put your thoughts on paper; it’s a process to help sort out your thoughts in the first place.

AI text generation can be a handy tool, Baron wrote, but “there’s a slippery slope between collaboration and encroachment.” As we wade into a world of more and more AI, it’s key to remember that “crafting written work should be a journey, not just a destination.”

The Value of Art

Generative AI programs don’t just produce text, but also complex images – which have even captured a prize or two. In theory, allowing AI to do nitty-gritty execution might free up human artists’ big-picture creativity.

Not so fast, said Eisikovits and Alec Stubbs, who is also a philosopher at the University of Massachusetts Boston. The finished object viewers appreciate is just part of the process we call “art.” For creator and appreciator alike, what makes art valuable is “the work of making something real and working through its details”: the struggle to turn ideas into something we can see.

This story is a roundup of articles originally published in The Conversation. It was compiled by Molly Jackson, the Religion and Ethics Editor at The Conversation, and includes work from Alec Stubbs, Postdoctoral Fellow in Philosophy, UMass Boston; Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder; Naomi S. Baron, Professor Emerita of Linguistics, American University; and Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston. It was reprinted with permission.

Taming AI Sooner Rather than Later

Image: AI rendering of futuristic robot photobombing the VP and new AI Czar

Planning Ahead to Avoid an AI Pandora’s Box

Vice President Kamala Harris wasted no time as the newly appointed White House Artificial Intelligence (AI) Czar. She has already met with the heads of companies involved in AI and explained that although artificial intelligence technology has the potential to benefit humanity, the opportunities it creates also come with extreme risk. She is now tasked with spearheading the effort to prevent a Pandora’s box situation in which, once the technology is unleashed, the bad that results may overshadow the good.

The plan that the administration is devising, overseen by the Vice President, calls for putting in place protections as the technology grows.

On May 4, Harris met with corporate heads of companies leading in AI technology. They included OpenAI, Google and Microsoft. In a tweet from the President’s desk, he is shown thanking the corporate heads in advance for their cooperation. “What you’re doing has enormous potential and enormous danger,” Biden told the CEOs.

Image: Twitter (@POTUS)

Amid recent warnings from AI experts that say tyrannical dictators could exploit the developing technology to push disinformation, the White House has allocated $140 million in funding for seven newly created AI research groups. President Biden has said the technology was “one of the most powerful” of our time, then added, “But in order to seize the opportunities it presents, we must first mitigate its risks.”

The full plan unveiled this week is to launch 25 research institutes across the US that will seek assurance from companies, including ChatGPT’s creator OpenAI, that they will ‘participate in a public evaluation.’

The reason for the concern and the actions taken is that many of the world’s best minds have been warning about the dangers of AI, specifically that it could be used against humanity. Serial tech entrepreneur Elon Musk fears AI technology will soon surpass human intelligence and develop independent thinking. Put another way, the machines would no longer need to abide by human commands. At the worst currently imagined, they may develop the ability to steal nuclear codes, create pandemics and spark world wars.

After Harris met with tech executives Thursday to discuss reducing potential risks, she said in a statement, “As I shared today with CEOs of companies at the forefront of American AI innovation, the private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products.”

Artificial intelligence was suddenly elevated to something that needs to be managed as awareness grew of just how remarkable and powerful the technology could become. That broad awareness followed OpenAI’s release of a version of ChatGPT that could already mimic humanlike thinking and interaction.

Other considerations, and probably many not yet conceived, stem from AI’s ability to generate humanlike writing and fake images, which raises ethical and societal concerns. As an example, the fabricated image at the top of this article was created within three minutes by a new user of an AI program.

Paul Hoffman

Managing Editor, Channelchek

Sources

https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/statement-from-vice-president-harris-after-meeting-with-ceos-on-advancing-responsible-artificial-intelligence-innovation/

AI Now Represents a Measurable Threat to the Workforce

Image: Tesla Bot (Tesla)

IBM Will Stop Hiring Professionals For Jobs Artificial Intelligence Might Do

Will AI take jobs and replace people in the future? Large companies are now making room for artificial intelligence alternatives by reducing hiring for positions that AI is expected to be able to fill. Bloomberg reported earlier in May that International Business Machines (IBM) expects to pause the hiring for thousands of positions that could be replaced by artificial intelligence in the coming years.

IBM’s CEO Arvind Krishna said in an interview with Bloomberg that hiring will be slowed or suspended for non-customer-facing roles, such as human resources, which make up 26,000 positions at the tech giant. Watercooler talk of how AI may alter the workforce has been part of discussions in offices across the globe in recent months. IBM’s policy helps define in real terms the impact AI will have. Krishna said he expects about 30% of those nearly 26,000 positions could be replaced by AI over a five-year period; that’s 7,800 jobs supplanted by AI.

IBM employs 260,000 people. Positions that involve interacting with customers and developing software are not on the chopping block, Krishna said in the interview.

Image credit: Focal Foto (Flickr)

Global Job Losses

In a recent Goldman Sachs research report titled Generative AI Could Raise Global GDP by 7%, it was shown that 66% of all occupations could be partially automated by AI. This could, over time, allow for more productivity. The report’s projections rest on the contingency that “generative AI delivers on its promised capabilities.” If it does, Goldman believes 300 million jobs could be threatened in the U.S. and Europe. If AI evolves as promised, Goldman estimates that one-fourth of current work could be accomplished using generative AI.

Sci-fi images of a future where robots replace human workers have existed since the word robot came to life in 1920. The current quick acceleration of AI programs, including ChatGPT and other OpenAI.com products, has ignited concerns that society is not yet ready to reckon with a massive shift in how production can be met without payroll.

Should Workers Worry?

Serial entrepreneur Elon Musk is one of the most vocal critics of AI. He is one of the founders of OpenAI and of the robot division at Tesla. In April, Musk claimed in an interview with Tucker Carlson on Fox News that he believes tech executives like Google’s Larry Page are “not taking AI safety seriously enough.” Musk says he has been called a “speciesist” for raising alarm bells about AI’s impact on humans. His concern is so great that he is moving forward with his own AI company, X.AI, which he says is a response to the recklessness of tech firms.

IBM now has digital labor solutions which help customers automate labor-intensive tasks such as data entry. “In digital labor, we are helping finance, accounting, and HR teams save thousands of hours by automating what used to be labor-intensive data-entry tasks,” Krishna said on the company’s earnings call on April 19. “These productivity initiatives free up spending for reinvestment and contribute to margin expansion.”

Technology and innovation have always benefitted households in the long term. The industrial revolution, and later the technology revolution, did at first eliminate jobs. Later, the human resources made available by machines increased productivity by freeing people to do more. Productivity, or increased GDP, is equivalent to a wealthier society as GDP per capita increases.

Paul Hoffman

Managing Editor, Channelchek

Source

https://www.ibm.com/investor/events/earnings-1q23

https://www.goldmansachs.com/insights/pages/generative-ai-could-raise-global-gdp-by-7-percent.html

https://fortune.com/2023/03/02/elon-musk-tesla-a-i-humanoid-robots-outnumber-people-economy/

The Coming War Between AI Generated Spam and Junk Mail Filters

Image Credit: This is Engineering (Pexels)

AI-Generated Spam May Soon Be Flooding Your Inbox – It Will Be Personalized to Be Especially Persuasive

Each day, messages from Nigerian princes, peddlers of wonder drugs and promoters of can’t-miss investments choke email inboxes. Improvements to spam filters only seem to inspire new techniques to break through the protections.

Now, the arms race between spam blockers and spam senders is about to escalate with the emergence of a new weapon: generative artificial intelligence. With recent advances in AI made famous by ChatGPT, spammers could have new tools to evade filters, grab people’s attention and convince them to click, buy or give up personal information.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of John Licato, Assistant Professor of Computer Science and Director of AMHR Lab, University of South Florida.

As director of the Advancing Human and Machine Reasoning lab at the University of South Florida, I research the intersection of artificial intelligence, natural language processing and human reasoning. I have studied how AI can learn the individual preferences, beliefs and personality quirks of people.

This can be used to better understand how to interact with people, help them learn or provide them with helpful suggestions. But this also means you should brace for smarter spam that knows your weak spots – and can use them against you.

Spam, Spam, Spam

So, what is spam?

Spam is defined as unsolicited commercial emails sent by an unknown entity. The term is sometimes extended to text messages, direct messages on social media and fake reviews on products. Spammers want to nudge you toward action: buying something, clicking on phishing links, installing malware or changing views.

Spam is profitable. One email blast can make US$1,000 in only a few hours, costing spammers only a few dollars – excluding initial setup. An online pharmaceutical spam campaign might generate around $7,000 per day.

Legitimate advertisers also want to nudge you to action – buying their products, taking their surveys, signing up for newsletters – but whereas a marketer email may link to an established company website and contain an unsubscribe option in accordance with federal regulations, a spam email may not.

Spammers also lack access to mailing lists that users signed up for. Instead, spammers utilize counter-intuitive strategies such as the “Nigerian prince” scam, in which a Nigerian prince claims to need your help to unlock an absurd amount of money, promising to reward you nicely. Savvy digital natives immediately dismiss such pleas, but the absurdity of the request may actually select for naïveté or advanced age, filtering for those most likely to fall for the scams.

Advances in AI, however, mean spammers might not have to rely on such hit-or-miss approaches. AI could allow them to target individuals and make their messages more persuasive based on easily accessible information, such as social media posts.

Future of Spam

Chances are you’ve heard about the advances in generative large language models like ChatGPT. The task these generative LLMs perform is deceptively simple: given a text sequence, predict which token – think of this as a part of a word – comes next. Then, predict which token comes after that. And so on, over and over.

Somehow, training on that task alone, when done with enough text on a large enough LLM, seems to be enough to imbue these models with the ability to perform surprisingly well on a lot of other tasks.
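To make the next-token loop concrete, here is a toy sketch with an invented probability table and a spam-flavored prompt; a real LLM computes these probabilities with a neural network over tens of thousands of subword tokens rather than a hand-built dictionary.

```python
# Toy sketch of the next-token loop: repeatedly predict the next word from a
# probability table. The table below is invented; a real LLM computes these
# probabilities with a neural network over tens of thousands of subword tokens.
import random

next_word_probs = {
    "click here to": {"claim": 0.5, "verify": 0.3, "unlock": 0.2},
    "claim": {"your": 0.9, "a": 0.1},
    "verify": {"your": 0.8, "the": 0.2},
    "unlock": {"your": 0.7, "exclusive": 0.3},
    "your": {"prize": 0.6, "account": 0.4},
}

def generate(prompt, steps=3, seed=7):
    random.seed(seed)
    text, key = prompt, prompt
    for _ in range(steps):
        dist = next_word_probs.get(key)
        if dist is None:                      # no continuation known: stop
            break
        words, weights = zip(*dist.items())
        key = random.choices(words, weights=weights)[0]   # sample the next token
        text += " " + key
    return text

print(generate("click here to"))   # e.g. "click here to claim your prize"
```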

Multiple ways to use the technology have already emerged, showcasing the technology’s ability to quickly adapt to, and learn about, individuals. For example, LLMs can write full emails in your writing style, given only a few examples of how you write. And there’s the classic example – now over a decade old – of Target figuring out a customer was pregnant before her father knew.

Spammers and marketers alike would benefit from being able to predict more about individuals with less data. Given your LinkedIn page, a few posts and a profile image or two, LLM-armed spammers might make reasonably accurate guesses about your political leanings, marital status or life priorities.

Our research showed that LLMs could be used to predict which word an individual will say next with a degree of accuracy far surpassing other AI approaches, in a word-generation task called the semantic fluency task. We also showed that LLMs can take certain types of questions from tests of reasoning abilities and predict how people will respond to that question. This suggests that LLMs already have some knowledge of what typical human reasoning ability looks like.

If spammers make it past initial filters and get you to read an email, click a link or even engage in conversation, their ability to apply customized persuasion increases dramatically. Here again, LLMs can change the game. Early results suggest that LLMs can be used to argue persuasively on topics ranging from politics to public health policy.

Good for the Gander

AI, however, doesn’t favor one side or the other. Spam filters also should benefit from advances in AI, allowing them to erect new barriers to unwanted emails.

Spammers often try to trick filters with special characters, misspelled words or hidden text, relying on the human propensity to forgive small text anomalies – for example, “c1îck h.ere n0w.” But as AI gets better at understanding spam messages, filters could get better at identifying and blocking unwanted spam – and maybe even letting through wanted spam, such as marketing email you’ve explicitly signed up for. Imagine a filter that predicts whether you’d want to read an email before you even read it.
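As a rough illustration of how a filter might apply this, the sketch below first normalizes obfuscations like “c1îck h.ere n0w” and then scores the message with a tiny Naive Bayes classifier trained on an invented handful of examples; real filters use vastly more data and many more signals.

```python
# Rough sketch of an AI-assisted spam filter: normalize obfuscated text such
# as "c1îck h.ere n0w", then score it with a tiny Naive Bayes model trained on
# an invented handful of messages. Real filters use far more data and signals.
import re
import unicodedata
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def normalize(text):
    text = unicodedata.normalize("NFKD", text)                    # î -> i + accent
    text = "".join(c for c in text if not unicodedata.combining(c))
    text = text.translate(str.maketrans("103", "loe"))            # 1->l, 0->o, 3->e
    return re.sub(r"(?<=\w)[.\-_](?=\w)", "", text.lower())       # drop "h.ere" dots

train_texts = [
    "click here now to claim your free prize",      # spam
    "urgent verify your account password today",    # spam
    "meeting notes attached for tomorrow",           # ham
    "lunch on thursday works for me",                # ham
]
train_labels = [1, 1, 0, 0]

vectorizer = CountVectorizer()
classifier = MultinomialNB().fit(vectorizer.fit_transform(train_texts), train_labels)

message = normalize("c1îck h.ere n0w to claim your free prize")
label = classifier.predict(vectorizer.transform([message]))[0]
print(message, "->", "spam" if label else "ham")
```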

Despite growing concerns about AI – as evidenced by Tesla, SpaceX and Twitter CEO Elon Musk, Apple founder Steve Wozniak and other tech leaders calling for a pause in AI development – a lot of good could come from advances in the technology. AI can help us understand how weaknesses in human reasoning might be exploited by bad actors and come up with ways to counter malevolent activities.

All new technologies can result in both wonder and danger. The difference lies in who creates and controls the tools, and how they are used.

Artificial Intelligence, Speculation, and ‘Technical Debt’

Image Credit: Focal Foto (Flickr)

AI Has Social Consequences, But Who Pays the Price?

As public concern about the ethical and social implications of artificial intelligence keeps growing, it might seem like it’s time to slow down. But inside tech companies themselves, the sentiment is quite the opposite. As Big Tech’s AI race heats up, it would be an “absolutely fatal error in this moment to worry about things that can be fixed later,” a Microsoft executive wrote in an internal email about generative AI, as The New York Times reported.

In other words, it’s time to “move fast and break things,” to quote Mark Zuckerberg’s old motto. Of course, when you break things, you might have to fix them later – at a cost.

In software development, the term “technical debt” refers to the implied cost of making future fixes as a consequence of choosing faster, less careful solutions now. Rushing to market can mean releasing software that isn’t ready, knowing that once it does hit the market, you’ll find out what the bugs are and can hopefully fix them then.

However, negative news stories about generative AI tend not to be about these kinds of bugs. Instead, much of the concern is about AI systems amplifying harmful biases and stereotypes and students using AI deceptively. We hear about privacy concerns, people being fooled by misinformation, labor exploitation and fears about how quickly human jobs may be replaced, to name a few. These problems are not software glitches. Realizing that a technology reinforces oppression or bias is very different from learning that a button on a website doesn’t work.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder.

As a technology ethics educator and researcher, I have thought a lot about these kinds of “bugs.” What’s accruing here is not just technical debt, but ethical debt. Just as technical debt can result from limited testing during the development process, ethical debt results from not considering possible negative consequences or societal harms. And with ethical debt in particular, the people who incur it are rarely the people who pay for it in the end.

Off to the Races

As soon as OpenAI’s ChatGPT was released in November 2022, the starter pistol for today’s AI race, I imagined the debt ledger starting to fill.

Within months, Google and Microsoft released their own generative AI programs, which seemed rushed to market in an effort to keep up. Google’s stock prices fell when its chatbot Bard confidently supplied a wrong answer during the company’s own demo. One might expect Microsoft to be particularly cautious when it comes to chatbots, considering Tay, its Twitter-based bot that was almost immediately shut down in 2016 after spouting misogynist and white supremacist talking points. Yet early conversations with the AI-powered Bing left some users unsettled, and it has repeated known misinformation.

When the social debt of these rushed releases comes due, I expect that we will hear mention of unintended or unanticipated consequences. After all, even with ethical guidelines in place, it’s not as if OpenAI, Microsoft or Google can see the future. How can someone know what societal problems might emerge before the technology is even fully developed?

The root of this dilemma is uncertainty, which is a common side effect of many technological revolutions, but magnified in the case of artificial intelligence. After all, part of the point of AI is that its actions are not known in advance. AI may not be designed to produce negative consequences, but it is designed to produce the unforeseen.

However, it is disingenuous to suggest that technologists cannot accurately speculate about what many of these consequences might be. By now, there have been countless examples of how AI can reproduce bias and exacerbate social inequities, but these problems are rarely publicly identified by tech companies themselves. It was external researchers who found racial bias in widely used commercial facial analysis systems, for example, and in a medical risk prediction algorithm that was being applied to around 200 million Americans. Academics and advocacy or research organizations like the Algorithmic Justice League and the Distributed AI Research Institute are doing much of this work: identifying harms after the fact. And this pattern doesn’t seem likely to change if companies keep firing ethicists.

Speculating – Responsibly

I sometimes describe myself as a technology optimist who thinks and prepares like a pessimist. The only way to decrease ethical debt is to take the time to think ahead about things that might go wrong – but this is not something that technologists are necessarily taught to do.

Scientist and iconic science fiction writer Isaac Asimov once said that sci-fi authors “foresee the inevitable, and although problems and catastrophes may be inevitable, solutions are not.” Of course, science fiction writers do not tend to be tasked with developing these solutions – but right now, the technologists developing AI are.

So how can AI designers learn to think more like science fiction writers? One of my current research projects focuses on developing ways to support this process of ethical speculation. I don’t mean designing with far-off robot wars in mind; I mean the ability to consider future consequences at all, including in the very near future.

This is a topic I’ve been exploring in my teaching for some time, encouraging students to think through the ethical implications of sci-fi technology in order to prepare them to do the same with technology they might create. One exercise I developed is called the Black Mirror Writers Room, where students speculate about possible negative consequences of technology like social media algorithms and self-driving cars. Often these discussions are based on patterns from the past or the potential for bad actors.

Ph.D. candidate Shamika Klassen and I evaluated this teaching exercise in a research study and found that there are pedagogical benefits to encouraging computing students to imagine what might go wrong in the future – and then brainstorm about how we might avoid that future in the first place.

However, the purpose isn’t to prepare students for those far-flung futures; it is to teach speculation as a skill that can be applied immediately. This skill is especially important for helping students imagine harm to other people, since technological harms often disproportionately impact marginalized groups that are underrepresented in computing professions. The next steps for my research are to translate these ethical speculation strategies for real-world technology design teams.

Time to Hit Pause?

In March 2023, an open letter with thousands of signatures advocated for pausing training AI systems more powerful than GPT-4. Unchecked, AI development “might eventually outnumber, outsmart, obsolete and replace us,” or even cause a “loss of control of our civilization,” its writers warned.

As critiques of the letter point out, this focus on hypothetical risks ignores actual harms happening today. Nevertheless, I think there is little disagreement among AI ethicists that AI development needs to slow down – that developers throwing up their hands and citing “unintended consequences” is not going to cut it.

We are only a few months into the “AI race” picking up significant speed, and I think it’s already clear that ethical considerations are being left in the dust. But the debt will come due eventually – and history suggests that Big Tech executives and investors may not be the ones paying for it.

Deep Fakes and the Risk of Abuse

Image Credit: Steve Juvetson (Flickr)

Watermarking ChatGPT and Other Generative AIs Could Help Protect Against Fraud and Misinformation

Shortly after rumors leaked of former President Donald Trump’s impending indictment, images purporting to show his arrest appeared online. These images looked like news photos, but they were fake. They were created by a generative artificial intelligence system.

Generative AI, in the form of image generators like DALL-E, Midjourney and Stable Diffusion, and text generators like Bard, ChatGPT, Chinchilla and LLaMA, has exploded in the public sphere. By combining clever machine-learning algorithms with billions of pieces of human-generated content, these systems can do anything from creating an eerily realistic image from a caption and synthesizing a speech in President Joe Biden’s voice to replacing one person’s likeness with another in a video or writing a coherent 800-word op-ed from a title prompt.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Hany Farid, Professor of Computer Science, University of California, Berkeley.

Even in these early days, generative AI is capable of creating highly realistic content. My colleague Sophie Nightingale and I found that the average person is unable to reliably distinguish an image of a real person from an AI-generated person. Although audio and video have not yet fully passed through the uncanny valley – images or models of people that are unsettling because they are close to but not quite realistic – they are likely to soon. When this happens, and it is all but guaranteed to, it will become increasingly easier to distort reality.

In this new world, it will be a snap to generate a video of a CEO saying her company’s profits are down 20%, which could lead to billions in market-share loss, or to generate a video of a world leader threatening military action, which could trigger a geopolitical crisis, or to insert the likeness of anyone into a sexually explicit video.

Advances in generative AI will soon mean that fake but visually convincing content will proliferate online, leading to an even messier information ecosystem. A secondary consequence is that detractors will be able to easily dismiss as fake actual video evidence of everything from police violence and human rights violations to a world leader burning top-secret documents.

As society stares down the barrel of what is almost certainly just the beginning of these advances in generative AI, there are reasonable and technologically feasible interventions that can be used to help mitigate these abuses. As a computer scientist who specializes in image forensics, I believe that a key method is watermarking.

Watermarks

There is a long history of marking documents and other items to prove their authenticity, indicate ownership and counter counterfeiting. Today, Getty Images, a massive image archive, adds a visible watermark to all digital images in their catalog. This allows customers to freely browse images while protecting Getty’s assets.

Imperceptible digital watermarks are also used for digital rights management. A watermark can be added to a digital image by, for example, tweaking every 10th image pixel so that its color (typically a number in the range 0 to 255) is even-valued. Because this pixel tweaking is so minor, the watermark is imperceptible. And, because this periodic pattern is unlikely to occur naturally, and can easily be verified, it can be used to verify an image’s provenance.
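A minimal sketch of that even-valued pixel scheme, using NumPy and assuming a single-channel 8-bit image; as the article notes, this simple pattern is easy to verify but not resilient to edits, so it only illustrates the embed-and-check idea.

```python
# Sketch of the simple even-valued pixel watermark described above, assuming
# a single-channel 8-bit image. This pattern is easy to verify but not
# resilient to edits; robust schemes spread the mark more cleverly.
import numpy as np

def embed_watermark(image, step=10):
    """Force every `step`-th pixel value (0-255) to be even."""
    marked = image.copy()
    flat = marked.reshape(-1)
    flat[::step] -= flat[::step] % 2     # odd values drop by 1: imperceptible
    return marked

def has_watermark(image, step=10):
    """Check for the periodic all-even pattern, unlikely to occur naturally."""
    return bool((image.reshape(-1)[::step] % 2 == 0).all())

original = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(original)
print(has_watermark(original), has_watermark(marked))   # almost surely False, True
```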

Even medium-resolution images contain millions of pixels, which means that additional information can be embedded into the watermark, including a unique identifier that encodes the generating software and a unique user ID. This same type of imperceptible watermark can be applied to audio and video.

The ideal watermark is one that is imperceptible and also resilient to simple manipulations like cropping, resizing, color adjustment and converting digital formats. Although the pixel color watermark example is not resilient because the color values can be changed, many watermarking strategies have been proposed that are robust – though not impervious – to attempts to remove them.

Watermarking and AI

These watermarks can be baked into the generative AI systems by watermarking all the training data, after which the generated content will contain the same watermark. This baked-in watermark is attractive because it means that generative AI tools can be open-sourced – as the image generator Stable Diffusion is – without concerns that a watermarking process could be removed from the image generator’s software. Stable Diffusion has a watermarking function, but because it’s open source, anyone can simply remove that part of the code.

OpenAI is experimenting with a system to watermark ChatGPT’s creations. Characters in a paragraph cannot, of course, be tweaked like a pixel value, so text watermarking takes on a different form.

Text-based generative AI is based on producing the next most-reasonable word in a sentence. For example, starting with the sentence fragment “an AI system can…,” ChatGPT will predict that the next word should be “learn,” “predict” or “understand.” Associated with each of these words is a probability corresponding to the likelihood of each word appearing next in the sentence. ChatGPT learned these probabilities from the large body of text it was trained on.

Generated text can be watermarked by secretly tagging a subset of words and then biasing the selection of a word to be a synonymous tagged word. For example, the tagged word “comprehend” can be used instead of “understand.” By periodically biasing word selection in this way, a body of text is watermarked based on a particular distribution of tagged words. This approach won’t work for short tweets but is generally effective with text of 800 or more words depending on the specific watermark details.
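Here is a small sketch of that tagged-synonym idea under simplifying assumptions: a handful of invented synonym pairs, a biased swap during generation, and detection based on the share of tagged words. Production schemes of the kind OpenAI has described operate on token probabilities rather than a fixed synonym list.

```python
# Sketch of the tagged-word idea: secretly tag one word in each synonym pair,
# bias generation toward tagged words, then detect the watermark from their
# unusually high share. The synonym list and bias here are invented.
import random

TAGGED = {"understand": "comprehend", "use": "utilize", "help": "assist",
          "show": "demonstrate", "big": "substantial"}
TAGGED_WORDS = set(TAGGED.values())

def watermark(text, bias=0.9, seed=0):
    """Swap plain words for their tagged synonyms with probability `bias`."""
    random.seed(seed)
    out = []
    for word in text.split():
        if word in TAGGED and random.random() < bias:
            word = TAGGED[word]
        out.append(word)
    return " ".join(out)

def tagged_share(text):
    """Fraction of swappable words that carry the tag (high => watermarked)."""
    words = [w for w in text.split() if w in TAGGED or w in TAGGED_WORDS]
    return sum(w in TAGGED_WORDS for w in words) / max(len(words), 1)

plain = "we use data to help readers understand and show big market trends"
marked = watermark(plain)
print(marked)
print("tagged share:", round(tagged_share(marked), 2))   # near 1.0 in marked text
```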

Generative AI systems can, and I believe should, watermark all their content, allowing for easier downstream identification and, if necessary, intervention. If the industry won’t do this voluntarily, lawmakers could pass regulation to enforce this rule. Unscrupulous people will, of course, not comply with these standards. But, if the major online gatekeepers – Apple and Google app stores, Amazon, Google, Microsoft cloud services and GitHub – enforce these rules by banning noncompliant software, the harm will be significantly reduced.

Signing Authentic Content

Tackling the problem from the other end, a similar approach could be adopted to authenticate original audiovisual recordings at the point of capture. A specialized camera app could cryptographically sign the recorded content as it’s recorded. There is no way to tamper with this signature without leaving evidence of the attempt. The signature is then stored on a centralized list of trusted signatures.
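
As a rough illustration of the signing step, the sketch below uses an Ed25519 key pair from the Python cryptography package; the in-memory key and the byte string stand in for a device's secure hardware key and an actual recording.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # In a real camera app, the private key would live in secure hardware on the device
    # and the public key would be published on the trusted-signature registry.
    device_key = ed25519.Ed25519PrivateKey.generate()
    public_key = device_key.public_key()

    recording = b"...raw audiovisual bytes captured by the camera..."
    signature = device_key.sign(recording)

    # Anyone holding the registered public key can later check the content.
    try:
        public_key.verify(signature, recording)
        print("content verified: unmodified since capture")
    except InvalidSignature:
        print("content or signature has been altered")

    # Changing even one byte breaks verification, which is the evidence of tampering.
    try:
        public_key.verify(signature, recording + b"!")
    except InvalidSignature:
        print("tampering detected")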

Although this approach is not applicable to text, audiovisual content can then be verified as human-generated. The Coalition for Content Provenance and Authenticity (C2PA), a collaborative effort to create a standard for authenticating media, recently released an open specification to support this approach. With major institutions including Adobe, Microsoft, Intel, BBC and many others joining this effort, the C2PA is well positioned to produce effective and widely deployed authentication technology.

The combined signing and watermarking of human-generated and AI-generated content will not prevent all forms of abuse, but it will provide some measure of protection. Any safeguards will have to be continually adapted and refined as adversaries find novel ways to weaponize the latest technologies.

In the same way that society has been fighting a decadeslong battle against other cyber threats like spam, malware and phishing, we should prepare ourselves for an equally protracted battle to defend against various forms of abuse perpetrated using generative AI.

What AI Will Do to Job Availability

Image Credit: Mises

The Fear of Mass Unemployment Due to Artificial Intelligence and Robotics Is Unfounded

People are arguing over whether artificial intelligence (AI) and robotics will eliminate human employment. Many seem to hold an all-or-nothing belief: either technology in the workplace will destroy human employment and purpose, or it will not affect it at all. The replacement of human jobs with robotics and AI is known as “technological unemployment.”

Although robotics can turn materials into economic goods in a fraction of the time it would take a human, in some cases with minimal human energy, some claim that AI and robotics will actually increase human employment. According to a 2020 Forbes projection, AI and robotics will be a strong creator of jobs and work for people across the globe in the near future. However, also in 2020, Daron Acemoglu and Pascual Restrepo published a study projecting negative job growth where AI and robotics replace human jobs, with significant losses each time a robot displaces a worker. But two years later, an article in The Economist showed that many economists had backtracked on their projections of high unemployment due to AI and robotics in the workplace. According to the 2022 Economist article, “Fears of a prolonged period of high unemployment did not come to pass. . . . The gloomy narrative, which says that an invasion of job-killing robots is just around the corner, has for decades had an extraordinary hold on the popular imagination.” So which scenario is correct?

Contrary to popular belief, no industrialized nation has ever completely replaced human energy with technology in the workplace. For instance, the steam shovel never put construction workers out of work; whether people want to work in construction is a different question. And bicycles did not become obsolete because of vehicle manufacturing: “Consumer spending on bicycles and accessories peaked at $8.3 billion in 2021,” according to an article from the World Economic Forum.

Do people generally think AI and robotics can run an economy without human involvement, energy, ingenuity, and cooperation? While AI and robotics have boosted economies, they cannot plan or run an economy or create technological unemployment worldwide. “Some countries are in better shape to join the AI competition than others,” according to the Carnegie Endowment for International Peace. Although an accurate statement, it misses the fact that productive economies adapt to technological changes better than nonproductive economies. Put another way, productive people are even more effective when they use technology. Firms using AI and robotics can lower production costs, lower prices, and stimulate demand; hence, employment grows if demand and therefore production increase. In the unlikely event that AI or robotic productive technology does not lower a firm’s prices and production costs, employment opportunities will decline in that industry, but employment will shift elsewhere, potentially expanding another industry’s capacity. This industry may then increase its use of AI and robotics, creating more employment opportunities there.

In the not-so-distant past, office administrators did not know how to use computers, but when the computer entered the workplace, it did not eliminate administrative employment as was initially predicted. Now here we are, walking around with minicomputers in our pants pockets. The introduction of the desktop computer did not eliminate human administrative workers—on the contrary, the computer has provided more employment since its introduction in the workplace. Employees and business owners, sometimes separated by time and space, use all sorts of technological devices, communicate with one another across vast networks, and can be increasingly productive.

I remember attending a retirement party held by a company where I worked decades ago. The retiring employee told us all a story about when the company brought in its first computer back in the late ’60s. The boss, she recalled, announced that the staff would be using computers instead of typewriters and paper to handle administrative tasks. The next day, her department went from a staff of thirty to a staff of five: twenty-five people left the company to seek jobs elsewhere so they would not “have to learn and deal with them darn computers.”

People often become afraid of losing their jobs when firms introduce new technology, particularly technology that is able to replicate human tasks. However, mass unemployment due to technological innovation has never happened in any industrialized nation. The notion that AI will disemploy humans in the marketplace is unfounded. Mike Thomas noted in his article “Robots and AI Taking Over Jobs: What to Know about the Future of Jobs” that “artificial intelligence is poised to eliminate millions of current jobs—and create millions of new ones.” The social angst about the future of AI and robotics is reminiscent of the early nineteenth-century Luddites of England and their fear of replacement technology. Luddites, heavily employed in the textile industry, feared the weaving machine would take their jobs. They traveled throughout England breaking and vandalizing machines and new manufacturing technology because of their fear of technological unemployment. However, as the textile industry there became capitalized, employment in that industry actually grew. History tells us that technology drives the increase of work and jobs for humans, not the opposite.

We should look forward to unskilled and semiskilled workers moving up from monotonous work because of AI and robotics. Of course, AI and robotics will have varying effects on different sectors, but as a whole they are enablers and amplifiers of human work. As noted, the steam shovel did not disemploy construction workers. The taxi industry was not eliminated by Uber’s technology; if anything, Uber’s AI lowered the barriers to entry into the taxi business. Musicians were not eliminated when music was digitized; instead, this innovation gave musicians larger platforms and audiences, allowing them to reach millions of people with the swipe of a screen. And dating apps running on AI have helped millions of people fall in love and live happily ever after.

About the Author

Raushan Gross is an Associate Professor of Business Management at Pfeiffer University. His works include Basic Entrepreneurship, Management and Strategy and the e-book The Inspiring Life and Beneficial Impact of Entrepreneurs.

AI Design Simplifies Complicated Structural Engineering

Image Credit: Autodesk

Integrating Humans with AI in Structural Design

David L. Chandler | MIT News Office

Modern fabrication tools such as 3D printers can make structural materials in shapes that would have been difficult or impossible using conventional tools. Meanwhile, new generative design systems can take great advantage of this flexibility to create innovative designs for parts of a new building, car, or virtually any other device.

But such “black box” automated systems often fall short of producing designs that are fully optimized for their purpose, such as providing the greatest strength in proportion to weight or minimizing the amount of material needed to support a given load. Fully manual design, on the other hand, is time-consuming and labor-intensive.

Now, researchers at MIT have found a way to achieve some of the best of both of these approaches. They used an automated design system but stopped the process periodically to allow human engineers to evaluate the work in progress and make tweaks or adjustments before letting the computer resume its design process. Introducing a few of these iterations produced results that performed better than those designed by the automated system alone, and the process was completed more quickly than the fully manual approach.

The results are reported this week in the journal Structural and Multidisciplinary Optimization, in a paper by MIT doctoral student Dat Ha and assistant professor of civil and environmental engineering Josephine Carstensen.

The basic approach can be applied to a broad range of scales and applications, Carstensen explains, for the design of everything from biomedical devices to nanoscale materials to structural support members of a skyscraper. Already, automated design systems have found many applications. “If we can make things in a better way, if we can make whatever we want, why not make it better?” she asks.

“It’s a way to take advantage of how we can make things in much more complex ways than we could in the past,” says Ha, adding that automated design systems have already begun to be widely used over the last decade in automotive and aerospace industries, where reducing weight while maintaining structural strength is a key need.

“You can take a lot of weight out of components, and in these two industries, everything is driven by weight,” he says. In some cases, such as internal components that aren’t visible, appearance is irrelevant, but for other structures, aesthetics may be important as well. The new system makes it possible to optimize designs for visual as well as mechanical properties, and in such decisions, the human touch is essential.

As a demonstration of their process in action, the researchers designed a number of structural load-bearing beams, such as might be used in a building or a bridge. In their iterations, they saw that the design had an area that could fail prematurely, so they selected that feature and required the program to address it. The computer system then revised the design accordingly, removing the highlighted strut and strengthening other struts to compensate, leading to an improved final design.

The process, which they call Human-Informed Topology Optimization, begins by setting out the needed specifications — for example, a beam needs to be this length, supported on two points at its ends, and must support this much of a load. “As we’re seeing the structure evolve on the computer screen in response to initial specification,” Carstensen says, “we interrupt the design and ask the user to judge it. The user can select, say, ‘I’m not a fan of this region, I’d like you to beef up or beef down this feature size requirement.’ And then the algorithm takes into account the user input.”
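
That workflow can be pictured as a simple interrupt-and-adjust loop: run the automated optimizer, pause periodically, fold the engineer's feedback into the constraints, and resume. The toy sketch below is only a schematic of that loop with made-up stand-ins; it is not the MIT group's algorithm or code.

    import random

    def human_informed_optimization(optimizer_step, show_design, ask_user,
                                    initial_design, total_iters=30, pause_every=10):
        """Run an automated design loop, pausing periodically for human tweaks."""
        design = initial_design
        user_constraints = {}
        for i in range(1, total_iters + 1):
            design = optimizer_step(design, user_constraints)   # one automated update
            if i % pause_every == 0 and i < total_iters:
                show_design(i, design)                          # engineer inspects progress
                user_constraints.update(ask_user(design))       # e.g. "beef up element 3"
        return design

    if __name__ == "__main__":
        # Toy stand-ins: the "design" is a list of element thicknesses that the optimizer
        # thins out, and the "user" insists that element 3 keep a minimum thickness.
        rng = random.Random(0)
        step = lambda d, c: [max(t - rng.uniform(0, 0.05), c.get(i, 0.0)) for i, t in enumerate(d)]
        show = lambda i, d: print(f"iteration {i}: {[round(t, 2) for t in d]}")
        ask = lambda d: {3: 0.8}   # hypothetical feedback: keep element 3 at thickness >= 0.8
        final = human_informed_optimization(step, show, ask, initial_design=[1.0] * 6)
        print("final:", [round(t, 2) for t in final])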

While the result is not as ideal as what might be produced by a fully rigorous yet significantly slower design algorithm that considers the underlying physics, she says it can be much better than a result generated by a rapid automated design system alone. “You don’t get something that’s quite as good, but that was not necessarily the goal. What we can show is that instead of using several hours to get something, we can use 10 minutes and get something much better than where we started off.”

The system can be used to optimize a design based on any desired properties, not just strength and weight. For example, it can be used to minimize fracture or buckling, or to reduce stresses in the material by softening corners.

Carstensen says, “We’re not looking to replace the seven-hour solution. If you have all the time and all the resources in the world, obviously you can run these and it’s going to give you the best solution.” But for many situations, such as designing replacement parts for equipment in a war zone or a disaster-relief area with limited computational power available, “then this kind of solution that catered directly to your needs would prevail.”

Similarly, for smaller companies manufacturing equipment in essentially “mom and pop” businesses, such a simplified system might be just the ticket. The new system they developed is not only simple and efficient to run on smaller computers, but it also requires far less training to produce useful results, Carstensen says. A basic two-dimensional version of the software, suitable for designing basic beams and structural parts, is freely available now online, she says, as the team continues to develop a full 3D version.

“The potential applications of Prof Carstensen’s research and tools are quite extraordinary,” says Christian Málaga-Chuquitaype, a professor of civil and environmental engineering at Imperial College London, who was not associated with this work. “With this work, her group is paving the way toward a truly synergistic human-machine design interaction.”

“By integrating engineering ‘intuition’ (or engineering ‘judgement’) into a rigorous yet computationally efficient topology optimization process, the human engineer is offered the possibility of guiding the creation of optimal structural configurations in a way that was not available to us before,” he adds. “Her findings have the potential to change the way engineers tackle ‘day-to-day’ design tasks.”

Reprinted with permission from MIT News (http://news.mit.edu/)

AI and the U.S. Military’s Unmanned Technological Edge

Image Credit: Marine Corps Warfighting Laboratory, MAGTF Integrated Experiment (MCWL)

War in Ukraine Accelerates Global Drive Toward Killer Robots

The U.S. military is intensifying its commitment to the development and use of autonomous weapons, as confirmed by an update to a Department of Defense directive. The update, released Jan. 25, 2023, is the first in a decade to focus on autonomous weapons that use artificial intelligence. It follows a related implementation plan released by NATO on Oct. 13, 2022, that is aimed at preserving the alliance’s “technological edge” in what are sometimes called “killer robots.”

Both announcements reflect a crucial lesson militaries around the world have learned from recent combat operations in Ukraine and Nagorno-Karabakh: Weaponized artificial intelligence is the future of warfare.

“We know that commanders are seeing a military value in loitering munitions in Ukraine,” Richard Moyes, director of Article 36, a humanitarian organization focused on reducing harm from weapons, told me in an interview. These weapons, which are a cross between a bomb and a drone, can hover for extended periods while waiting for a target. For now, such semi-autonomous missiles are generally being operated with significant human control over key decisions, he said.

Pressure of War

But as casualties mount in Ukraine, so does the pressure to achieve decisive battlefield advantages with fully autonomous weapons – robots that can choose, hunt down and attack their targets all on their own, without needing any human supervision.

This month, a key Russian manufacturer announced plans to develop a new combat version of its Marker reconnaissance robot, an uncrewed ground vehicle, to augment existing forces in Ukraine. Fully autonomous drones are already being used to defend Ukrainian energy facilities from other drones. Wahid Nawabi, CEO of the U.S. defense contractor that manufactures the semi-autonomous Switchblade drone, said the technology is already within reach to convert these weapons to become fully autonomous.

Mykhailo Fedorov, Ukraine’s digital transformation minister, has argued that fully autonomous weapons are the war’s “logical and inevitable next step” and recently said that soldiers might see them on the battlefield in the next six months.

Proponents of fully autonomous weapons systems argue that the technology will keep soldiers out of harm’s way by keeping them off the battlefield. They also argue that these weapons would allow military decisions to be made at superhuman speed, radically improving defensive capabilities.

Currently, semi-autonomous weapons, like loitering munitions that track and detonate themselves on targets, require a “human in the loop.” They can recommend actions but require their operators to initiate them.

By contrast, fully autonomous drones, like the so-called “drone hunters” now deployed in Ukraine, can track and disable incoming unmanned aerial vehicles day and night, with no need for operator intervention and faster than human-controlled weapons systems.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of James Dawes, Professor, Macalester College.

Calling for a Timeout

Critics like The Campaign to Stop Killer Robots have been advocating for more than a decade to ban research and development of autonomous weapons systems. They point to a future where autonomous weapons systems are designed specifically to target humans, not just vehicles, infrastructure and other weapons. They argue that wartime decisions over life and death must remain in human hands. Turning them over to an algorithm amounts to the ultimate form of digital dehumanization.

Together with Human Rights Watch, The Campaign to Stop Killer Robots argues that autonomous weapons systems lack the human judgment necessary to distinguish between civilians and legitimate military targets. They also lower the threshold to war by reducing the perceived risks, and they erode meaningful human control over what happens on the battlefield.

Image: A composite showing a ‘Switchblade’ loitering munition drone launching from a tube and extending its folded wings. (U.S. Army AMRDEC Public Affairs)

The organizations argue that the militaries investing most heavily in autonomous weapons systems, including the U.S., Russia, China, South Korea and the European Union, are launching the world into a costly and destabilizing new arms race. One consequence could be this dangerous new technology falling into the hands of terrorists and others outside of government control.

The updated Department of Defense directive tries to address some of the key concerns. It declares that the U.S. will use autonomous weapons systems with “appropriate levels of human judgment over the use of force.” Human Rights Watch issued a statement saying that the new directive fails to make clear what the phrase “appropriate level” means and doesn’t establish guidelines for who should determine it.

But as Gregory Allen, an expert from the national defense and international relations think tank Center for Strategic and International Studies, argues, this language establishes a lower threshold than the “meaningful human control” demanded by critics. The Defense Department’s wording, he points out, allows for the possibility that in certain cases, such as with surveillance aircraft, the level of human control considered appropriate “may be little to none.”

The updated directive also includes language promising ethical use of autonomous weapons systems, specifically by establishing a system of oversight for developing and employing the technology, and by insisting that the weapons will be used in accordance with existing international laws of war. But Article 36’s Moyes noted that international law currently does not provide an adequate framework for understanding, much less regulating, the concept of weapon autonomy.

The current legal framework does not make it clear, for instance, that commanders are responsible for understanding what will trigger the systems that they use, or that they must limit the area and time over which those systems will operate. “The danger is that there is not a bright line between where we are now and where we have accepted the unacceptable,” said Moyes.

Impossible Balance?

The Pentagon’s update demonstrates a simultaneous commitment to deploying autonomous weapons systems and to complying with international humanitarian law. How the U.S. will balance these commitments, and if such a balance is even possible, remains to be seen.

The International Committee of the Red Cross, the custodian of international humanitarian law, insists that the legal obligations of commanders and operators “cannot be transferred to a machine, algorithm or weapon system.” Right now, human beings are held responsible for protecting civilians and limiting combat damage by making sure the use of force is proportional to military objectives.

If and when artificially intelligent weapons are deployed on the battlefield, who should be held responsible when needless civilian deaths occur? There isn’t a clear answer to that very important question.