What AI Will Do to Job Availability

Image Credit: Mises

The Fear of Mass Unemployment Due to Artificial Intelligence and Robotics Is Unfounded

People are arguing over whether artificial intelligence (AI) and robotics will eliminate human employment. Many seem to hold an all-or-nothing belief: either the use of technology in the workplace will destroy human employment and purpose, or it will not affect it at all. The replacement of human jobs with robotics and AI is known as “technological unemployment.”

Although robotics can turn materials into economic goods in a fraction of the time it would take a human, in some cases using minimal human energy, some claim that AI and robotics will actually bring about increasing human employment. According to a 2020 Forbes projection, AI and robotics will be a strong creator of jobs and work for people across the globe in the near future. However, also in 2020, Daron Acemoglu and Pascual Restrepo published a study that projected negative job growth when AI and robotics replace human jobs, predicting significant job loss each time a robot replaces a human in the workplace. But two years later, an article in The Economist showed that many economists have backtracked on their projection of a high unemployment rate due to AI and robotics in the workplace. According to the 2022 Economist article, “Fears of a prolonged period of high unemployment did not come to pass. . . . The gloomy narrative, which says that an invasion of job-killing robots is just around the corner, has for decades had an extraordinary hold on the popular imagination.” So which scenario is correct?

Contrary to popular belief, no industrialized nation has ever completely replaced human energy with technology in the workplace. For instance, the steam shovel never put construction workers out of work; whether people want to work in construction is a different question. And bicycles did not become obsolete because of vehicle manufacturing: “Consumer spending on bicycles and accessories peaked at $8.3 billion in 2021,” according to an article from the World Economic Forum.

Do people generally think AI and robotics can run an economy without human involvement, energy, ingenuity, and cooperation? While AI and robotics have boosted economies, they cannot plan or run an economy or create technological unemployment worldwide. “Some countries are in better shape to join the AI competition than others,” according to the Carnegie Endowment for International Peace. Although an accurate statement, it misses the fact that productive economies adapt to technological changes better than nonproductive economies. Put another way, productive people are even more effective when they use technology. Firms using AI and robotics can lower production costs, lower prices, and stimulate demand; hence, employment grows if demand and therefore production increase. In the unlikely event that AI or robotic productive technology does not lower a firm’s prices and production costs, employment opportunities will decline in that industry, but employment will shift elsewhere, potentially expanding another industry’s capacity. This industry may then increase its use of AI and robotics, creating more employment opportunities there.
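A back-of-the-envelope illustration of that mechanism (all numbers hypothetical, with a deliberately simple linear demand response) shows how a cost-cutting technology can increase total labor demanded even though each unit needs fewer labor hours:

```python
# Toy illustration with hypothetical numbers: automation lowers price, demand responds,
# and total labor hours can rise even though robots now do part of each unit's work.
price_before, price_after = 12.0, 9.0        # automation lets the firm cut its price
quantity_before = 1_000                      # units sold at the old price
elasticity = 2.0                             # assumed price elasticity of demand

pct_price_drop = (price_before - price_after) / price_before
quantity_after = quantity_before * (1 + elasticity * pct_price_drop)   # 1,500 units

hours_per_unit_before, hours_per_unit_after = 1.0, 0.7   # robots now do 30% of each unit's work
print(quantity_before * hours_per_unit_before)   # 1000.0 labor hours before automation
print(quantity_after * hours_per_unit_after)     # 1050.0 labor hours after automation
```

If demand were less elastic, the same arithmetic would show employment in that industry falling, which is the case the essay addresses by noting that employment then shifts to other industries.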

In the not-so-distant past, office administrators did not know how to use computers, but when the computer entered the workplace, it did not eliminate administrative employment as was initially predicted. Now here we are, walking around with minicomputers in our pants pockets. The introduction of the desktop computer did not eliminate human administrative workers—on the contrary, the computer has provided more employment since its introduction in the workplace. Employees and business owners, sometimes separated by time and space, use all sorts of technological devices, communicate with one another across vast networks, and can be increasingly productive.

I remember attending a retirement party held by a company where I worked decades ago. The retiring employee told us all a story about when the company brought in its first computer back in the late ’60s. The retiree recalled, “The boss said we were going to use computers instead of typewriters and paper to handle administrative tasks. The next day, her department went from a staff of thirty to a staff of five.” The day after the department installed computers, twenty-five people left the company to seek jobs elsewhere so they would not “have to learn and deal with them darn computers.”

People often become afraid of losing their jobs when firms introduce new technology, particularly technology that is able to replicate human tasks. However, mass unemployment due to technological innovation has never happened in any industrialized nation. The notion that AI will disemploy humans in the marketplace is unfounded. Mike Thomas noted in his article “Robots and AI Taking Over Jobs: What to Know about the Future of Jobs” that “artificial intelligence is poised to eliminate millions of current jobs—and create millions of new ones.” The social angst about the future of AI and robotics is reminiscent of the early nineteenth-century Luddites of England and their fear of replacement technology. Luddites, heavily employed in the textile industry, feared the weaving machine would take their jobs. They traveled throughout England breaking and vandalizing machines and new manufacturing technology because of their fear of technological unemployment. However, as the textile industry there became capitalized, employment in that industry actually grew. History tells us that technology drives the increase of work and jobs for humans, not the opposite.

We should look forward to AI and robotics upgrading unskilled and semiskilled workers out of monotonous work. Of course, AI and robotics will have varying effects on different sectors; but as a whole, they are enablers and amplifiers of human work. As noted, the steam shovel did not disemploy construction workers. The taxi industry was not eliminated because of Uber’s technology; if anything, Uber’s new AI technology lowered the barriers to entry in the taxi industry. Musicians were not eliminated when music was digitized; instead, this innovation gave musicians larger platforms and audiences, allowing them to reach millions of people with the swipe of a screen. And dating apps running on AI have helped millions of people fall in love and live happily ever after.

About the Author

Raushan Gross is an Associate Professor of Business Management at Pfeiffer University. His works include Basic Entrepreneurship, Management and Strategy, and the e-book The Inspiring Life and Beneficial Impact of Entrepreneurs.

AI Design Simplifies Complicated Structural Engineering

Image Credit: Autodesk

Integrating Humans with AI in Structural Design

David L. Chandler | MIT News Office

Modern fabrication tools such as 3D printers can make structural materials in shapes that would have been difficult or impossible using conventional tools. Meanwhile, new generative design systems can take great advantage of this flexibility to create innovative designs for parts of a new building, car, or virtually any other device.

But such “black box” automated systems often fall short of producing designs that are fully optimized for their purpose, such as providing the greatest strength in proportion to weight or minimizing the amount of material needed to support a given load. Fully manual design, on the other hand, is time-consuming and labor-intensive.

Now, researchers at MIT have found a way to achieve some of the best of both of these approaches. They used an automated design system but stopped the process periodically to allow human engineers to evaluate the work in progress and make tweaks or adjustments before letting the computer resume its design process. Introducing a few of these iterations produced results that performed better than those designed by the automated system alone, and the process was completed more quickly than with the fully manual approach.

The results are reported this week in the journal Structural and Multidisciplinary Optimization, in a paper by MIT doctoral student Dat Ha and assistant professor of civil and environmental engineering Josephine Carstensen.

The basic approach can be applied to a broad range of scales and applications, Carstensen explains, for the design of everything from biomedical devices to nanoscale materials to structural support members of a skyscraper. Already, automated design systems have found many applications. “If we can make things in a better way, if we can make whatever we want, why not make it better?” she asks.

“It’s a way to take advantage of how we can make things in much more complex ways than we could in the past,” says Ha, adding that automated design systems have already begun to be widely used over the last decade in automotive and aerospace industries, where reducing weight while maintaining structural strength is a key need.

“You can take a lot of weight out of components, and in these two industries, everything is driven by weight,” he says. In some cases, such as internal components that aren’t visible, appearance is irrelevant, but for other structures, aesthetics may be important as well. The new system makes it possible to optimize designs for visual as well as mechanical properties, and in such decisions, the human touch is essential.

As a demonstration of their process in action, the researchers designed a number of structural load-bearing beams, such as might be used in a building or a bridge. In their iterations, they saw that the design had an area that could fail prematurely, so they selected that feature and required the program to address it. The computer system then revised the design accordingly, removing the highlighted strut and strengthening some other struts to compensate, leading to an improved final design.

The process, which they call Human-Informed Topology Optimization, begins by setting out the needed specifications — for example, a beam needs to be this length, supported on two points at its ends, and must support this much of a load. “As we’re seeing the structure evolve on the computer screen in response to initial specification,” Carstensen says, “we interrupt the design and ask the user to judge it. The user can select, say, ‘I’m not a fan of this region, I’d like you to beef up or beef down this feature size requirement.’ And then the algorithm takes into account the user input.”
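The paper's actual solver is not reproduced here, but the control flow Carstensen describes (optimize, pause for human review, fold the feedback back in as constraints, then resume) can be sketched in a few lines. The snippet below is a toy illustration only: the quadratic stand-in objective, the region the user flags, and helper names such as apply_user_feedback are assumptions, not the authors' code.

```python
# Toy sketch of a human-informed optimization loop: the machine iterates, the human
# interrupts at pause points, and the feedback becomes bounds the optimizer must respect.
import numpy as np

def objective_gradient(density, target):
    """Toy stand-in for the structural solver: pull densities toward a target field."""
    return density - target

def optimize_step(density, target, lr=0.2):
    """One automated design update, with densities kept between 0 (void) and 1 (solid)."""
    return np.clip(density - lr * objective_gradient(density, target), 0.0, 1.0)

def apply_user_feedback(bounds_lo, bounds_hi, region, action):
    """Translate a user's request on a flagged region into density bounds."""
    if action == "beef_up":       # user wants more material here
        bounds_lo[region] = np.maximum(bounds_lo[region], 0.8)
    elif action == "beef_down":   # user wants less material here
        bounds_hi[region] = np.minimum(bounds_hi[region], 0.2)
    return bounds_lo, bounds_hi

# Problem setup: a 20 x 40 design domain; the "target" pattern stands in for real physics.
rng = np.random.default_rng(0)
target = (rng.random((20, 40)) > 0.5).astype(float)
density = np.full((20, 40), 0.5)
bounds_lo, bounds_hi = np.zeros_like(density), np.ones_like(density)

for it in range(60):
    density = np.clip(optimize_step(density, target), bounds_lo, bounds_hi)
    if it in (19, 39):                            # pause points for human review
        region = (slice(5, 10), slice(10, 20))    # region the engineer flagged (hypothetical)
        action = "beef_up" if it == 19 else "beef_down"
        bounds_lo, bounds_hi = apply_user_feedback(bounds_lo, bounds_hi, region, action)

print("final material fraction:", round(float(density.mean()), 3))
```

In the actual system the automated steps are a physics-based topology optimization and the pause points fall wherever the engineer chooses to interrupt, but the division of labor is the same: the machine iterates, the human steers.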

While the result is not as ideal as what might be produced by a fully rigorous yet significantly slower design algorithm that considers the underlying physics, she says it can be much better than a result generated by a rapid automated design system alone. “You don’t get something that’s quite as good, but that was not necessarily the goal. What we can show is that instead of using several hours to get something, we can use 10 minutes and get something much better than where we started off.”

The system can be used to optimize a design based on any desired properties, not just strength and weight. For example, it can be used to minimize fracture or buckling, or to reduce stresses in the material by softening corners.

Carstensen says, “We’re not looking to replace the seven-hour solution. If you have all the time and all the resources in the world, obviously you can run these and it’s going to give you the best solution.” But for many situations, such as designing replacement parts for equipment in a war zone or a disaster-relief area with limited computational power available, “then this kind of solution that catered directly to your needs would prevail.”

Similarly, for smaller companies manufacturing equipment in essentially “mom and pop” businesses, such a simplified system might be just the ticket. The new system they developed is not only simple and efficient to run on smaller computers, but it also requires far less training to produce useful results, Carstensen says. A basic two-dimensional version of the software, suitable for designing basic beams and structural parts, is freely available now online, she says, as the team continues to develop a full 3D version.

“The potential applications of Prof Carstensen’s research and tools are quite extraordinary,” says Christian Málaga-Chuquitaype, a professor of civil and environmental engineering at Imperial College London, who was not associated with this work. “With this work, her group is paving the way toward a truly synergistic human-machine design interaction.”

“By integrating engineering ‘intuition’ (or engineering ‘judgement’) into a rigorous yet computationally efficient topology optimization process, the human engineer is offered the possibility of guiding the creation of optimal structural configurations in a way that was not available to us before,” he adds. “Her findings have the potential to change the way engineers tackle ‘day-to-day’ design tasks.”

Reprinted with permission from MIT News (http://news.mit.edu/)

AI and the U.S. Military’s Unmanned Technological Edge

Image Credit: Marine Corps Warfighting Laboratory MAGTF Integrated Experiment (MCWL), 160709-M-OB268-165

War in Ukraine Accelerates Global Drive Toward Killer Robots

The U.S. military is intensifying its commitment to the development and use of autonomous weapons, as confirmed by an update to a Department of Defense directive. The update, released Jan. 25, 2023, is the first in a decade to focus on artificial intelligence in autonomous weapons. It follows a related implementation plan released by NATO on Oct. 13, 2022, that is aimed at preserving the alliance’s “technological edge” in what are sometimes called “killer robots.”

Both announcements reflect a crucial lesson militaries around the world have learned from recent combat operations in Ukraine and Nagorno-Karabakh: Weaponized artificial intelligence is the future of warfare.

“We know that commanders are seeing a military value in loitering munitions in Ukraine,” Richard Moyes, director of Article 36, a humanitarian organization focused on reducing harm from weapons, told me in an interview. These weapons, which are a cross between a bomb and a drone, can hover for extended periods while waiting for a target. For now, such semi-autonomous missiles are generally being operated with significant human control over key decisions, he said.

Pressure of War

But as casualties mount in Ukraine, so does the pressure to achieve decisive battlefield advantages with fully autonomous weapons – robots that can choose, hunt down and attack their targets all on their own, without needing any human supervision.

This month, a key Russian manufacturer announced plans to develop a new combat version of its Marker reconnaissance robot, an uncrewed ground vehicle, to augment existing forces in Ukraine. Fully autonomous drones are already being used to defend Ukrainian energy facilities from other drones. Wahid Nawabi, CEO of the U.S. defense contractor that manufactures the semi-autonomous Switchblade drone, said the technology is already within reach to convert these weapons to become fully autonomous.

Mykhailo Fedorov, Ukraine’s digital transformation minister, has argued that fully autonomous weapons are the war’s “logical and inevitable next step” and recently said that soldiers might see them on the battlefield in the next six months.

Proponents of fully autonomous weapons systems argue that the technology will keep soldiers out of harm’s way by keeping them off the battlefield. Such systems would also allow military decisions to be made at superhuman speed, radically improving defensive capabilities.

Currently, semi-autonomous weapons, like loitering munitions that track and detonate themselves on targets, require a “human in the loop.” They can recommend actions but require their operators to initiate them.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of James Dawes, Professor, Macalester College.

By contrast, fully autonomous drones, like the so-called “drone hunters” now deployed in Ukraine, can track and disable incoming unmanned aerial vehicles day and night, with no need for operator intervention and faster than human-controlled weapons systems.

Calling for a Timeout

Critics like The Campaign to Stop Killer Robots have been advocating for more than a decade to ban research and development of autonomous weapons systems. They point to a future where autonomous weapons systems are designed specifically to target humans, not just vehicles, infrastructure and other weapons. They argue that wartime decisions over life and death must remain in human hands, and that turning them over to an algorithm amounts to the ultimate form of digital dehumanization.

Together with Human Rights Watch, The Campaign to Stop Killer Robots argues that autonomous weapons systems lack the human judgment necessary to distinguish between civilians and legitimate military targets. They also lower the threshold to war by reducing the perceived risks, and they erode meaningful human control over what happens on the battlefield.

This composite image shows a ‘Switchblade’ loitering munition drone launching from a tube and extending its folded wings. U.S. Army AMRDEC Public Affairs

The organizations argue that the militaries investing most heavily in autonomous weapons systems, including the U.S., Russia, China, South Korea and the European Union, are launching the world into a costly and destabilizing new arms race. One consequence could be this dangerous new technology falling into the hands of terrorists and others outside of government control.

The updated Department of Defense directive tries to address some of the key concerns. It declares that the U.S. will use autonomous weapons systems with “appropriate levels of human judgment over the use of force.” Human Rights Watch issued a statement saying that the new directive fails to make clear what the phrase “appropriate level” means and doesn’t establish guidelines for who should determine it.

But as Gregory Allen, an expert from the national defense and international relations think tank Center for Strategic and International Studies, argues, this language establishes a lower threshold than the “meaningful human control” demanded by critics. The Defense Department’s wording, he points out, allows for the possibility that in certain cases, such as with surveillance aircraft, the level of human control considered appropriate “may be little to none.”

The updated directive also includes language promising ethical use of autonomous weapons systems, specifically by establishing a system of oversight for developing and employing the technology, and by insisting that the weapons will be used in accordance with existing international laws of war. But Article 36’s Moyes noted that international law currently does not provide an adequate framework for understanding, much less regulating, the concept of weapon autonomy.

The current legal framework does not make it clear, for instance, that commanders are responsible for understanding what will trigger the systems that they use, or that they must limit the area and time over which those systems will operate. “The danger is that there is not a bright line between where we are now and where we have accepted the unacceptable,” said Moyes.

Impossible Balance?

The Pentagon’s update demonstrates a simultaneous commitment to deploying autonomous weapons systems and to complying with international humanitarian law. How the U.S. will balance these commitments, and if such a balance is even possible, remains to be seen.

The International Committee of the Red Cross, the custodian of international humanitarian law, insists that the legal obligations of commanders and operators “cannot be transferred to a machine, algorithm or weapon system.” Right now, human beings are held responsible for protecting civilians and limiting combat damage by making sure the use of force is proportional to military objectives.

If and when artificially intelligent weapons are deployed on the battlefield, who should be held responsible when needless civilian deaths occur? There isn’t a clear answer to that very important question.

Will AI Allow for Better Service, Higher Profit?

Image Credit: KAZ Vorpal (Flickr)

Will AI Learn to Become a Better Entrepreneur than You?

Contemporary businesses use artificial intelligence (AI) tools to assist with operations and compete in the marketplace. AI enables firms and entrepreneurs to make data-driven decisions and to quicken the data-gathering process. When creating strategy, buying, selling, and increasing marketplace discovery, firms need to ask: What is better, artificial or human intelligence?

A recent article from the Harvard Business Review, “Can AI Help You Sell?,” stated, “Better algorithms lead to better service and greater success.” The attributes of the successful entrepreneur, such as calculated risk taking, tolerance for uncertainty, a keen sense for market signals, and the ability to adjust to marketplace changes, might be a thing of the past. Can AI take the place of the human entrepreneur? Would sophisticated artificial intelligence be able to spot market prices better, adjust to expectations better, and steer production toward the needs of consumers better than a human?

In one of my classes this semester, students and I discussed the role of AI, deep machine learning, and natural language processing (NLP) in driving many of the decisions and operations a human would otherwise provide within the firm. Of course, half of the class felt that the integration of some level of AI into many firms’ operations and resource management is beneficial in creating a competitive advantage.

However, the other half felt using AI will inevitably disable humans’ function in the market economy, resulting in less and less individualism. In other words, the firm will be overrun by AI. We can see that even younger college students are on the fence about whether AI will eliminate humans’ function in the market economy. We concluded as a class that AI and machine learning have their promises and shortcomings.

After class, I started thinking about the digital world of entrepreneurship. E-commerce demands the use of AI to reach customers, sell goods, produce goods, and host exchange—in conjunction with a human entrepreneur, of course.

However, AI—machine learning or deep machine learning—could also be tasked with creating a business-based model, examining the data on customers’ needs, designing a web page, and creating ads. Could AI adjust to market action and react to market uncertainty like a human? The answer may be a resounding yes! So, could AI eliminate the human entrepreneur?

Algorithm-XLab explains deep machine learning as something that “allows computers to solve complex problems. These systems can even handle diverse masses of unstructured data set.” Algorithm-XLab compared deep learning with human learning favorably, stating, “While a human can easily lose concentration, and possibly make a mistake, a robot won’t.”

This statement by Algorithm-XLab challenges the idea that trial and error leads to greater market knowledge and better enables entrepreneurs to provide consumers with what they are willing to buy. The statement also portrays the marketplace as a process where people have perfect knowledge and an equilibrium point, and it implies that humans do not have specialized knowledge of time and place.

The use of AI and its tools of deep learning and language processing do have their benefits from a technical standpoint. AI can determine how to produce hula hoops better, but can it determine whether to produce them or devote energy elsewhere? If entrepreneurs discover market opportunities, they must weigh the advantages and disadvantages of their potential actions. Will AI have the same entrepreneurial foresight?

Market knowledge can take humans years to acquire; AI is much faster at it than humans would be. For example, the Allen Institute for AI is “working on systems that can take science tests, which require a knowledge of unstated facts and common sense that humans develop over the course of their lives.” The ability to process unstated, scattered facts is precisely the kind of characteristic we attribute to entrepreneurs. Processes, changes, and choices characterize the operation of the market, and the entrepreneur is at the center of this market function.

There is no doubt that contemporary firms use deep learning for strategy, operations, logistics, sales, and record keeping for human resources (HR) decision-making, according to a Bain & Company article titled “HR’s New Digital Mandate.” While focused on HR, the digital mandate does lend itself to questioning the use of entrepreneurial thinking and strategy conducted within a firm. After AI has learned how to operate a firm using robotic process automation and NLP capacities to their maximum, might it outstrip humans’ natural entrepreneurial abilities?

AI is used in everyday life, such as self-checkout at the grocery store, online shopping, social media interaction, dating apps, and virtual doctor appointments. Product delivery, financing, and development services increasingly involve an AI-as-a-service component. AI as a service minimizes the costs of gathering and processing customer insights, something usually associated with a team of human minds projecting key performance indicators aligned with an organizational strategy.

The human entrepreneur has a competitive advantage in handling ambiguous customer feedback, crafting an entrepreneurial response, and delivering satisfaction. We have been asking whether AI has replaced human energy in some areas of life. Can AI understand human uneasiness or dissatisfaction, or the subjectivity of value felt by the consumer? AI can produce hula hoops, but can it articulate plans and gather the resources needed to produce them in the first place?

In what, if any, entrepreneurial functions can AI outperform the human entrepreneur? The human entrepreneur is willing to take risks, adjust to the needs of consumers, pick up price signals, and understand customer choices. Could the human entrepreneur soon become an extinct class? If so, would machine learning and natural processing AI understand the differences between free and highly regulated markets? If so, which would it prefer, or which would it create?

When OpenAI Unleashed ChatGPT, Productivity and Efficiency Took a Giant Leap Forward

Image Credit: Alan Levine (Flickr)

Getting to Know ChatGPT: A Shallow Dip into Its Current Functionality and Future Potential

ChatGPT and the underlying AI technology are, as near as anyone can tell, the next-level toolkit for research, communication, idea generation, and a million other uses. If you haven’t introduced yourself to it yet, leave plenty of time – getting started is easy, but getting yourself to stop may not be. Those of us who were around pre-internet may liken the first-time experience to the first time we gained access to the world wide web. The sudden, much easier access to a world of information puts you in a place where you don’t know what to try first.

Over the past couple of months, the now-familiar name ChatGPT has made headlines. It quickly captivated the world’s imagination as an AI product that is likely to disrupt or destroy some industries while enhancing others. Obviously, this is of particular interest to investors. With this in mind, I sat down for an interview with ChatGPT; below are answers to questions that many investors are asking.

Channelchek (PH) – Am I able to interview ChatGPT?

ChatGPT – Hello! As an AI language model, I’m happy to answer any questions you may have. However, it’s important to keep in mind that I don’t have personal experiences or opinions like a human, and I can only provide information based on my training data. So feel free to ask me anything!

Channelchek (PH) – What do most users have ChatGPT do for them?

ChatGPT – As an AI language model, ChatGPT can be used for a wide range of tasks, including:

  • Language translation: ChatGPT can be used to translate text from one language to another.
  • Content creation: ChatGPT can generate text for various types of content, such as articles, blog posts, and product descriptions.
  • Chatbot development: ChatGPT can be used to develop chatbots that can interact with users and provide information.
  • Sentiment analysis: ChatGPT can be used to analyze the sentiment of a piece of text, such as a social media post or a product review.
  • Question-answering: ChatGPT can be used to answer questions on a wide range of topics.
  • Text summarization: ChatGPT can be used to summarize long pieces of text into shorter, more manageable summaries.
  • Language modeling: ChatGPT can be used to train new models for specific domains, such as medical or legal language.

These are just a few examples of what ChatGPT can be used for. Its versatility and flexibility make it a powerful tool for many different applications.

Channelchek (PH) – From where do you pull your answers, ChatGPT?

ChatGPT – As an AI language model, I don’t have pre-programmed answers. Instead, I generate responses to questions by analyzing patterns in large amounts of text data that I was trained on. Specifically, I was trained on a dataset of billions of words from various sources, including books, articles, websites, and other textual sources. During my training, I was exposed to a wide range of topics and writing styles, which helps me generate responses to many different types of questions.

When you ask me a question, I use natural language processing (NLP) techniques to analyze the question and generate a response based on patterns in the text data that I was trained on. I then use statistical models to determine the most likely response based on the context of the question and the patterns in the training data.

It’s important to note that my responses are only as accurate and reliable as the patterns that I learned during my training. While I strive to provide accurate and helpful information, I may not always get it right, and my responses may not reflect the views or opinions of OpenAI.

Channelchek (PH) – Are you at risk of having the responses you generate slanted or biased based on users’ feedback adjusting your training, ChatGPT?

ChatGPT – As an AI language model, I am designed to learn from the data that I am trained on, including the feedback that users provide to adjust my training. However, it’s important to be aware that the feedback that users provide can sometimes introduce biases into my training.

If the feedback is consistently biased in a particular direction, this could affect the accuracy and fairness of my responses. This is why it’s important to carefully monitor and evaluate any feedback that is used to adjust my training, and to ensure that it is representative of a wide range of perspectives.

To address this issue, OpenAI has implemented a number of safeguards to prevent bias and ensure that my training data is as accurate and diverse as possible. For example, the training data is carefully curated to include a wide range of sources and perspectives, and the training process is carefully monitored to ensure that biases are not introduced.

Additionally, OpenAI has implemented a number of techniques to help identify and mitigate any biases that may be present in the training data or feedback. For example, I am periodically evaluated to ensure that my responses are accurate and unbiased, and any issues that are identified are addressed through additional training or other adjustments to the model.

Channelchek (PH) – As an editor, I write for Channelchek about stocks, the financial markets, the economy, and the business climate. Are you a good resource for current events related to these topics, ChatGPT?

ChatGPT – As an AI language model, I can certainly provide you with information on current events related to stocks, financial markets, the economy, and the business climate. I have been trained on a vast amount of text data, including news articles and other sources related to these topics, so I can provide you with up-to-date information and insights on current events.

When asking me questions related to these topics, it’s important to provide me with as much context and specificity as possible so that I can provide you with the most accurate and relevant information. For example, if you are looking for information on a particular company or sector, you could provide me with the name of the company or sector, as well as any other relevant details or questions you may have.

However, it’s important to note that my responses are generated based on patterns in the text data that I was trained on, and may not reflect the views or opinions of OpenAI. Additionally, while I can provide you with information on current events and trends, I cannot provide investment advice or make predictions about future market performance. It’s always important to carefully evaluate information and seek the advice of a qualified financial professional before making investment decisions.

Channelchek (PH) – Do users ask you to forecast the stock market direction?

ChatGPT – Yes, users sometimes ask me to forecast the direction of the stock market. However, as an AI language model, I cannot make accurate predictions about future market performance. While I can provide information on current market trends and historical data, predicting the future direction of the stock market is a highly complex task that involves a range of economic, political, and social factors.

It’s important to note that investing in the stock market involves a certain degree of risk, and investors should always carefully evaluate market conditions and seek the advice of a qualified financial professional before making investment decisions. While I can provide general information and insights related to stocks and financial markets, I cannot provide investment advice or predict the future performance of individual stocks or the overall market.

Channelchek (PH) – Are you familiar with Channelchek?

ChatGPT – While I am not specifically programmed to be familiar with individual websites, I may have come across content from Channelchek during my training.

That being said, I can still provide you with general information on what Channelchek is. From my understanding, Channelchek is a financial information platform that provides independent research, analysis, and market commentary on publicly traded companies. It offers a range of tools and resources for investors and financial professionals, including market data, stock quotes, and research reports.

Channelchek’s focus is on providing unbiased and independent information to help investors make informed investment decisions.

Channelchek (PH) – Does your AI technology pose a risk to traditional internet search engines?

ChatGPT – My purpose is not to compete with traditional search engines like Google or Bing, but rather to complement them by providing an additional source of information and assistance to users. While I am capable of providing answers to a wide range of questions and generating responses to open-ended queries, I am not designed to replace traditional search engines or web browsing.

Search engines like Google and Bing are still the primary means by which users navigate the internet and find information, and they use a range of sophisticated algorithms and machine learning techniques to index and rank content on the web. While I am capable of processing natural language queries and generating responses, my training data is based on pre-existing content, which means that my responses are generated based on patterns in the text data that I was trained on, rather than direct access to the vast amounts of data on the internet.

Early Boolean Phrase Search Tool

Overall, AI language models like me are part of a larger ecosystem of tools and technologies that are designed to help users access and navigate the vast amount of information available online. While there may be some overlap between the capabilities of traditional search engines and AI language models, both serve different purposes and can work together to provide users with the most comprehensive and accurate information possible.

Channelchek (PH) – What industries are you likely to disrupt and which are you likely to augment or help build?

ChatGPT – As an AI language model, my capabilities are quite broad, and I have the potential to impact a wide range of industries and areas of society. Here are a few ways in which I could potentially disrupt or augment various industries:

Industries that could be disrupted:

  • Customer service: AI language models like me are already being used by many companies to automate and improve customer service interactions, and this trend is likely to continue. As AI technology improves, it could potentially replace many human customer service jobs.
  • Content creation: AI language models have the potential to automate certain types of content creation, such as news articles, product descriptions, and even social media posts. This could have a significant impact on the journalism and writing industries.
  • Education: AI language models could potentially disrupt the traditional education model by providing personalized learning experiences and automating certain types of teaching and grading.

Industries that could be augmented:

  • Healthcare: AI language models could help augment healthcare by providing more accurate and personalized diagnoses, and by automating certain types of administrative tasks.
  • Finance: AI language models could help augment finance by providing more accurate and data-driven investment advice, and by automating certain types of financial analysis.
  • Research: AI language models could help augment research by processing and analyzing vast amounts of data, and by identifying patterns and insights that might be difficult for humans to spot on their own.

Overall, the impact of AI language models like me on various industries will depend on a wide range of factors, including the development of the technology, the regulatory environment, and societal attitudes toward automation and AI. While there may be some industries that are more susceptible to disruption or augmentation than others, it’s important to approach these developments with a nuanced understanding of the potential benefits and risks.

Channelchek (PH) – Are users using ChatGPT to create social media bots to respond without human intervention likely to flood the internet?

ChatGPT – While it is technically possible for users to use AI language models like ChatGPT to create social media bots that can respond to messages without human intervention, I cannot speculate on whether this is likely to flood the internet.

The use of AI language models like ChatGPT could potentially make it easier and more accessible for users to create more sophisticated chatbots that can mimic human conversation more convincingly.

It’s also important to consider the potential impact of widespread use of social media bots on the quality and authenticity of online interactions. While chatbots can be useful in certain contexts, they can also be used to spread misinformation and propaganda or to manipulate public opinion. As with any technology, it’s important to approach the use of AI language models and chatbots in a responsible and ethical way, and to consider the potential risks and benefits of their use.

Channelchek (PH) – What do you call one that uses ChatGPT?

ChatGPT – There is no specific term for a person who uses ChatGPT or other AI language models, as these tools are designed to be accessible and usable by anyone with an internet connection.

However, a person who uses ChatGPT to create chatbots or other automated tools could be referred to as an AI developer or AI engineer. Similarly, a person who uses ChatGPT to conduct research or analysis in a specific field could be referred to as an AI researcher or analyst.

Channelchek (PH) – How much do you cost?

ChatGPT – As an AI language model, I do not have a specific cost as I am not a commercial product. My purpose is to provide information and assistance to users like you, and I am freely accessible through various platforms that integrate my technology, including chatbots, voice assistants, and search engines.

Channelchek (PH) – Should I throw out the encyclopedias on my book shelf and rely instead on ChatGPT?

ChatGPT – Ultimately, whether or not to keep your encyclopedias depends on your personal preferences and how you prefer to access information. While AI language models like ChatGPT are convenient and accessible, they may not always provide the level of detail or accuracy that you require for a specific question or topic. So, it’s always a good idea to consult multiple sources of information to ensure that you have a well-rounded understanding of a given topic.

Channelchek (PH) – How does one get started using ChatGPT for research?

ChatGPT – Getting started with using ChatGPT for research can involve several steps. Here are a few basic steps you can follow:

Identify your research question or topic: Start by identifying the research question or topic you want to investigate. This will help you to focus your search and make the best use of ChatGPT.

Choose a ChatGPT platform: There are several platforms that integrate ChatGPT, including chatbots, voice assistants, and search engines. Choose a platform that best fits your needs and preferences.

Familiarize yourself with the platform: Once you have chosen a platform, take some time to explore and familiarize yourself with its features and capabilities. Many platforms offer tutorials and guides to help you get started.

Start asking questions: Once you are comfortable with the platform, start asking questions related to your research topic. Try to ask open-ended questions that will allow ChatGPT to provide more detailed and relevant answers.

Evaluate the results: As you receive answers from ChatGPT, evaluate the accuracy and relevance of the information. Keep in mind that ChatGPT is not perfect and may not always provide the most accurate or comprehensive information. Be sure to double-check any information you receive with other sources to ensure its accuracy.

Refine your search: If you are not finding the information you need, try refining your search by using more specific keywords or phrases.

Overall, using ChatGPT for research can be a valuable tool for gathering information and conducting preliminary investigations. However, it’s important to keep in mind that it is just one tool in the research process, and should be used in conjunction with other sources of information and research methods.

Wrap Up

It’s easy to comprehend how this technology, which will likely see stronger competition in the coming year, will quickly become something we use every day to help streamline how we do our jobs and find information in our personal lives. Unlike an internet search engine, queries produce individual results tailored to the individual question; the same question will receive different phraseology if asked a minute later. Whereas Google or DuckDuckGo list websites that may provide the answer, ChatGPT responds with its own answer generated using artificial intelligence.
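For readers who would rather experiment programmatically than through the web page described below, here is a hypothetical sketch that assumes an OpenAI API key and the official openai Python package (the pre-1.0 ChatCompletion interface); the model name, prompt, and key placeholder are illustrative, and API access is typically a paid, usage-based developer product separate from the free chat site.

```python
# Hypothetical sketch: querying OpenAI's hosted model from Python.
# Assumes `pip install openai` (pre-1.0 interface) and a valid API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; create a key on OpenAI's developer platform

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a concise research assistant."},
        {"role": "user", "content": "Explain in three bullet points how large language models generate text."},
    ],
    temperature=0.2,  # lower temperature gives more repeatable answers
)

print(response["choices"][0]["message"]["content"])
```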

Getting started is as easy as going to OpenAI.com, navigating to Chat.OpenAI.com, and providing an email address and a verification phone number. Click on my name below and write me; I’d love to hear what you are using it for.

Paul Hoffman

Managing Editor, Channelchek

Source:

OpenAI. Retrieved February 15, 2023

AI a New Favorite Among Retail Investors

Image Credit: Focal Foto (Flickr)

Recent Investment Trends Include Small-Cap Artificial Intelligence Stocks

C3 AI, sometimes written C3.ai, is an artificial intelligence platform company that provides services for companies to build large-scale AI applications. Its stock was the fifth most heavily traded among Fidelity’s retail investors on Monday (February 6), including a record-breaking $31.4 million worth of shares traded among the broker’s individual self-directed traders. According to Reuters, “Retail investors are piling up on small-cap firms that employ artificial intelligence amid intensifying competition between tech titans.” The article points to Google and Microsoft as examples of companies that expect AI to be the next meaningful driver of growth.

Investors, for their part, are looking to get ahead of any acquisition spree that deep-pocketed companies may embark on, which could include buying the advanced technology by acquiring small-cap tech firms.

Focus Heightened by ChatGPT

The spotlight ChatGPT finds itself in, three months after its launch, is indicative of the interest in this technology among investors and users. With applications as numerous as one can think up, the technology could outdate many services provided by tech companies like Alphabet (GOOGL) or Microsoft (MSFT) – big tech has catching up to do. This seems to have created a race among cash-rich companies to avoid being disrupted and left behind.

Investors’ recent focus on small companies in this space favors those that are concentrated in AI technology. One main reason is that small-cap or microcap firms in this space are likely to have AI as a more concentrated part of their business. The bet is that it doesn’t much matter whether the small company continues to grow independently or is acquired by a larger firm looking to instantly be on par with current technology; either outcome is a win for the investor.

And it has been a win: C3 AI stock rallied 46% last week and climbed another 6.5% on Monday. It is now up 146% year to date.

Other Companies Involved

SoundHound AI, which provides a voice AI platform, and Thailand-based security firm Guardforce AI have more than doubled so far this year, while analytics firm BigBear.AI has increased ninefold.

US-listed shares of Baidu Inc climbed after the Chinese search engine indicated it would complete an internal test of a ChatGPT-style project called “Ernie Bot” next month.

Shares of Microsoft, which backs ChatGPT parent OpenAI, have been ratcheting up over the past month, and the stock gained 1.5% in premarket trading ahead of the company’s expected announcement of its AI plans this week.

Google-owner Alphabet Inc said this week it would launch Bard, a chatbot service for developers, alongside its search engine.

Take Away

Change in technology that leads to improvements in daily lives has always been a focus of investors betting on which companies will outlast the others with “the next big thing.” These companies start out as small growth companies, as Apple (AAPL) did in 1976. Then a number of paths lie ahead: they can grow on their own as the Jobs/Wozniak computer maker did, get acquired for an early payday for investors and other stakeholders, or be outcompeted, leaving investors with a non-performing asset.

Channelchek is a platform that specializes in bringing data and research on small-cap companies, including many varieties of new technology, to the investors that insist on being informed before they place a trade. Discover more on the industries of tomorrow by signing up for notifications in your inbox from Channelchek by registering here.

Paul Hoffman

Managing Editor, Channelchek

Sources

https://www.reuters.com/markets/us/retail-investors-flock-small-cap-ai-firms-big-tech-battles-share-2023-02-07/

https://www.barrons.com/articles/c3-ai-stock-rally-bull-wall-street-51675441248?mod=Searchresults

Detecting Deepfake Voice is Now Crucial to Security

Image Credit: Kenya Allmond (Flickr)

Deepfake Audio Has a Tell – Researchers Use Fluid Dynamics to Spot Artificial Imposter Voices

Imagine the following scenario. A phone rings. An office worker answers it and hears his boss, in a panic, tell him that she forgot to transfer money to the new contractor before she left for the day and needs him to do it. She gives him the wire transfer information, and with the money transferred, the crisis has been averted.

The worker sits back in his chair, takes a deep breath, and watches as his boss walks in the door. The voice on the other end of the call was not his boss. In fact, it wasn’t even a human. The voice he heard was that of an audio deepfake, a machine-generated audio sample designed to sound exactly like his boss.

Attacks like this using recorded audio have already occurred, and conversational audio deepfakes might not be far off.

Deepfakes, both audio and video, have been possible only with the development of sophisticated machine learning technologies in recent years. Deepfakes have brought with them a new level of uncertainty around digital media. To detect deepfakes, many researchers have turned to analyzing visual artifacts – minute glitches and inconsistencies – found in video deepfakes.

Audio deepfakes potentially pose an even greater threat, because people often communicate verbally without video – for example, via phone calls, radio and voice recordings. These voice-only communications greatly expand the possibilities for attackers to use deepfakes.

To detect audio deepfakes, we and our research colleagues at the University of Florida have developed a technique that measures the acoustic and fluid dynamic differences between voice samples created organically by human speakers and those generated synthetically by computers.

Organic vs. Synthetic voices

Humans vocalize by forcing air over the various structures of the vocal tract, including vocal folds, tongue and lips. By rearranging these structures, you alter the acoustical properties of your vocal tract, allowing you to create over 200 distinct sounds, or phonemes. However, human anatomy fundamentally limits the acoustic behavior of these different phonemes, resulting in a relatively small range of correct sounds for each.

In contrast, audio deepfakes are created by first allowing a computer to listen to audio recordings of a targeted victim speaker. Depending on the exact techniques used, the computer might need to listen to as little as 10 to 20 seconds of audio. This audio is used to extract key information about the unique aspects of the victim’s voice.

The attacker selects a phrase for the deepfake to speak and then, using a modified text-to-speech algorithm, generates an audio sample that sounds like the victim saying the selected phrase. This process of creating a single deepfaked audio sample can be accomplished in a matter of seconds, potentially allowing attackers enough flexibility to use the deepfake voice in a conversation.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Logan Blue, PhD student in Computer & Information Science & Engineering, University of Florida, and Patrick Traynor, Professor of Computer and Information Science and Engineering, University of Florida.

Detecting Audio Deepfakes

The first step in differentiating speech produced by humans from speech generated by deepfakes is understanding how to acoustically model the vocal tract. Luckily scientists have techniques to estimate what someone – or some being such as a dinosaur – would sound like based on anatomical measurements of its vocal tract.

We did the reverse. By inverting many of these same techniques, we were able to extract an approximation of a speaker’s vocal tract during a segment of speech. This allowed us to effectively peer into the anatomy of the speaker who created the audio sample.
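The researchers' inversion pipeline is not reproduced here, but the flavor of the idea can be sketched with a classic signal-processing tool: linear predictive coding (LPC), whose reflection coefficients map onto a lossless-tube model of the vocal tract. The snippet below is an illustrative approximation under those textbook assumptions (a synthesized vowel-like frame and an arbitrary model order), not the method used in the paper.

```python
# Illustrative sketch: estimate relative vocal tract "areas" from one speech frame
# via LPC reflection coefficients and the classic lossless-tube relation.
# Assumes numpy, scipy, and librosa are installed; the synthesized frame and the
# model order are assumptions made so the example runs without a real recording.
import numpy as np
from scipy.signal import lfilter
import librosa

def lpc_to_reflection(a):
    """Step-down (backward Levinson) recursion: LPC polynomial -> reflection coefficients."""
    a = np.asarray(a, dtype=float)
    ks = []
    cur = a.copy()
    for i in range(len(a) - 1, 0, -1):
        k = cur[i]
        ks.append(k)
        if abs(k) >= 1.0:          # unstable frame; stop early
            break
        cur = (cur[:i] - k * cur[i:0:-1]) / (1.0 - k * k)
    return np.array(ks[::-1])

# Synthesize a crude vowel-like frame (impulse train through two resonators) so the
# sketch runs without an audio file; in practice this would be a real speech frame.
sr, f0 = 16000, 120
n = int(0.03 * sr)                                    # a 30 ms frame
frame = (np.arange(n) % (sr // f0) == 0).astype(float)
for freq, bw in [(700, 100), (1200, 120)]:            # two formant-like resonators
    r, theta = np.exp(-np.pi * bw / sr), 2 * np.pi * freq / sr
    frame = lfilter([1.0], [1.0, -2 * r * np.cos(theta), r * r], frame)

frame = np.append(frame[0], frame[1:] - 0.97 * frame[:-1])    # pre-emphasis
a = librosa.lpc(frame * np.hamming(len(frame)), order=12)     # arbitrary model order
k = lpc_to_reflection(a)
areas = np.cumprod((1.0 - k) / (1.0 + k))   # relative tube areas under one common convention
print(np.round(areas, 3))                   # implausible profiles can hint at synthetic speech
```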

Deepfaked audio often results in vocal tract reconstructions that resemble drinking straws rather than biological vocal tracts. Logan Blue (The Conversation)

From here, we hypothesized that deepfake audio samples would fail to be constrained by the same anatomical limitations humans have. In other words, we expected that analyzing deepfaked audio samples would yield simulated vocal tract shapes that do not exist in people.

Our testing results not only confirmed our hypothesis but revealed something interesting. When extracting vocal tract estimations from deepfake audio, we found that the estimations were often comically incorrect. For instance, it was common for deepfake audio to result in vocal tracts with the same relative diameter and consistency as a drinking straw, in contrast to human vocal tracts, which are much wider and more variable in shape.

This realization demonstrates that deepfake audio, even when convincing to human listeners, is far from indistinguishable from human-generated speech. By estimating the anatomy responsible for creating the observed speech, it’s possible to identify whether the audio was generated by a person or a computer.

Why this matters

Today’s world is defined by the digital exchange of media and information. Everything from news to entertainment to conversations with loved ones typically happens via digital exchanges. Even in their infancy, deepfake video and audio undermine the confidence people have in these exchanges, effectively limiting their usefulness.

If the digital world is to remain a critical resource for information in people’s lives, effective and secure techniques for determining the source of an audio sample are crucial.