AI is Exciting – and an Ethical Minefield

Four Essential Reads on the Risks and Concerns Over Artificial Intelligence

If you’re like me, you’ve spent a lot of time over the past few months trying to figure out what this AI thing is all about. Large language models, generative AI, algorithmic bias – it’s a lot for the less tech-savvy among us to sort out as we try to make sense of the myriad headlines about artificial intelligence.

But understanding how AI works is just part of the dilemma. As a society, we’re also confronting concerns about its social, psychological and ethical effects. Here we spotlight articles on the deeper questions the AI revolution raises: bias and inequality, the learning process, the impact on jobs, and even the artistic process.

Ethical Debt

When a company rushes software to market, it often accrues “technical debt”: the cost of having to fix bugs after a program is released, instead of ironing them out beforehand.

There are examples of this in AI as companies race ahead to compete with each other. More alarming, though, is “ethical debt,” when development teams haven’t considered possible social or ethical harms – how AI could replace human jobs, for example, or when algorithms end up reinforcing biases.

Casey Fiesler, a technology ethics expert at the University of Colorado Boulder, wrote that she’s “a technology optimist who thinks and prepares like a pessimist”: someone who puts in time speculating about what might go wrong.

That kind of speculation is an especially useful skill for technologists trying to envision consequences that might not impact them, Fiesler explained, but that could hurt “marginalized groups that are underrepresented” in tech fields. When it comes to ethical debt, she noted, “the people who incur it are rarely the people who pay for it in the end.”

Is Anybody There?

AI programs’ abilities can give the impression that they are sentient, but they’re not, explained Nir Eisikovits, director of the Applied Ethics Center at the University of Massachusetts Boston. “ChatGPT and similar technologies are sophisticated sentence completion applications – nothing more, nothing less,” he wrote.

But saying AI isn’t conscious doesn’t mean it’s harmless.

“To me,” Eisikovits explained, “the pressing question is not whether machines are sentient but why it is so easy for us to imagine that they are.” Humans easily project human features onto just about anything, including technology. That tendency to anthropomorphize “points to real risks of psychological entanglement with technology,” according to Eisikovits, who studies AI’s impact on how people understand themselves.

Considering how many people talk to their pets and cars, it shouldn’t be a surprise that chatbots can come to mean so much to people who engage with them. The next steps, though, are “strong guardrails” to prevent programs from taking advantage of that emotional connection.

Putting Pen to Paper

From the start, ChatGPT fueled parents’ and teachers’ fears about cheating. How could educators – or college admissions officers, for that matter – figure out if an essay was written by a human or a chatbot?

But AI sparks more fundamental questions about writing, according to Naomi Baron, an American University linguist who studies technology’s effects on language. AI’s potential threat to writing isn’t just about honesty, but about the ability to think itself.

Baron pointed to novelist Flannery O’Connor’s remark that “I write because I don’t know what I think until I read what I say.” In other words, writing isn’t just a way to put your thoughts on paper; it’s a process to help sort out your thoughts in the first place.

AI text generation can be a handy tool, Baron wrote, but “there’s a slippery slope between collaboration and encroachment.” As we wade into a world of more and more AI, it’s key to remember that “crafting written work should be a journey, not just a destination.”

The Value of Art

Generative AI programs don’t just produce text, but also complex images – which have even captured a prize or two. In theory, allowing AI to do nitty-gritty execution might free up human artists’ big-picture creativity.

Not so fast, said Eisikovits and Alec Stubbs, who is also a philosopher at the University of Massachusetts Boston. The finished object viewers appreciate is just part of the process we call “art.” For creator and appreciator alike, what makes art valuable is “the work of making something real and working through its details”: the struggle to turn ideas into something we can see.

This story is a roundup of articles originally published in The Conversation. It was compiled by Molly Jackson, the Religion and Ethics Editor at The Conversation. It includes work from Alec Stubbs, Postdoctoral Fellow in Philosophy, UMass Boston; Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder; Naomi S. Baron, Professor Emerita of Linguistics, American University; and Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston. It was reprinted with permission.

Taming AI Sooner Rather than Later

Image: AI rendering of futuristic robot photobombing the VP and new AI Czar

Planning Ahead to Avoid an AI Pandora’s Box

Vice President Kamala Harris wasted no time as the newly appointed White House Artificial Intelligence (AI) Czar. She has already met with the heads of companies involved in AI and explained that although artificial intelligence has the potential to benefit humanity, the opportunities it creates also come with extreme risk. She is now tasked with spearheading the effort to prevent a Pandora’s box situation, where the bad that results, once unleashed, may overshadow the good.

The plan that the administration is devising, overseen by the Vice President, calls for putting in place protections as the technology grows.

On May 4, Harris met with corporate heads of companies leading in AI technology. They included OpenAI, Google and Microsoft. In a tweet from the President’s desk, he is shown thanking the corporate heads in advance for their cooperation. “What you’re doing has enormous potential and enormous danger,” Biden told the CEOs.

Image: Twitter (@POTUS)

Amid recent warnings from AI experts that tyrannical dictators could exploit the developing technology to push disinformation, the White House has allocated $140 million in funding for seven newly created AI research groups. President Biden has said the technology was “one of the most powerful” of our time, then added, “But in order to seize the opportunities it presents, we must first mitigate its risks.”

The full plan unveiled this week is to launch 25 research institutes across the US that will seek assurance from companies, including ChatGPT’s creator OpenAI, that they will ‘participate in a public evaluation.’

The reason for the concern and the actions taken is that many of the world’s best minds have been warning about the dangers of AI, specifically that it could be used against humanity. Serial tech entrepreneur Elon Musk fears AI technology will soon surpass human intelligence and develop independent thinking. Put another way, the machines would no longer need to abide by human commands. At the worst currently imagined, they may develop the ability to steal nuclear codes, create pandemics and spark world wars.

After Harris met with tech executives Thursday to discuss reducing potential risks, she said in a statement, “As I shared today with CEOs of companies at the forefront of American AI innovation, the private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products.”

Artificial intelligence was suddenly elevated to something that needs managing as awareness grew of just how remarkable and powerful the technology could become. That broad awareness came as OpenAI released a version of ChatGPT with the ability to mimic humanlike thinking and interaction.

Among other considerations – and probably many not yet conceived – AI can generate humanlike writing and fake images, raising ethical and societal concerns. As an example, the fabricated image at the top of this article was created within three minutes by a new user of an AI program.

Paul Hoffman

Managing Editor, Channelchek

Sources

https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/statement-from-vice-president-harris-after-meeting-with-ceos-on-advancing-responsible-artificial-intelligence-innovation/

AI Now Represents a Measurable Threat to the Workforce

Image: Tesla Bot (Tesla)

IBM Will Stop Hiring Professionals For Jobs Artificial Intelligence Might Do

Will AI take jobs and replace people in the future? Large companies are now making room for artificial intelligence alternatives by reducing hiring for positions that AI is expected to be able to fill. Bloomberg reported earlier in May that International Business Machines (IBM) expects to pause hiring for thousands of positions that could be replaced by artificial intelligence in the coming years.

IBM’s CEO Arvind Krishna said in an interview with Bloomberg that hiring will be slowed or suspended for non-customer-facing roles, such as human resources, which make up 26,000 positions at the tech giant. Watercooler talk of how AI may alter the workforce has been part of discussions in offices across the globe in recent months. IBM’s policy helps define in real terms the impact AI will have. Krishna said he expects about 30% of those nearly 26,000 positions could be replaced by AI over a five-year period at the company – that’s 7,800 jobs supplanted by AI.

IBM employs 260,000 people; positions that involve interacting with customers and developing software are not on the chopping block, Krishna said in the interview.

Image credit: Focal Foto (Flickr)

Global Job Losses

A recent Goldman Sachs research report, titled Generative AI Could Raise Global GDP by 7%, found that 66% of all occupations could be partially automated by AI. This could, over time, allow for more productivity. The report’s specifics hinge on the contingency that “generative AI delivers on its promised capabilities.” If it does, Goldman believes 300 million jobs could be threatened in the U.S. and Europe. If AI evolves as promised, Goldman estimates that one-fourth of current work could be accomplished using generative AI.

Sci-fi images of a future where robots replace human workers have existed since the word robot came to life in 1920. The current rapid acceleration of AI programs, including ChatGPT and other OpenAI.com products, has ignited concerns that society is not yet ready to reckon with a massive shift in how production can be accomplished without payroll.

Should Workers Worry?

Serial entrepreneur Elon Musk is one of the most vocal critics of AI. He is one of the founders of OpenAI and of the robot division at Tesla. In April, Musk claimed in an interview with Tucker Carlson on Fox News that he believes tech executives like Google’s Larry Page are “not taking AI safety seriously enough.” Musk asserts that he’s been called a “speciesist” for raising alarm bells about AI’s impact on humans. His concern is so great that he is moving forward with his own AI company, X.AI – a response, he says, to the recklessness of tech firms.

IBM now has digital labor solutions which help customers automate labor-intensive tasks such as data entry. “In digital labor, we are helping finance, accounting, and HR teams save thousands of hours by automating what used to be labor-intensive data-entry tasks,” Krishna said on the company’s earnings call on April 19. “These productivity initiatives free up spending for reinvestment and contribute to margin expansion.”

Technology and innovation have always benefited households in the long term. The industrial revolution, and later the technology revolution, did at first eliminate jobs. Later, the human resources freed up by machines increased productivity by allowing people to do more. Higher productivity, or increased GDP, makes for a wealthier society as GDP per capita rises.

Paul Hoffman

Managing Editor, Channelchek

Source

https://www.ibm.com/investor/events/earnings-1q23

https://www.goldmansachs.com/insights/pages/generative-ai-could-raise-global-gdp-by-7-percent.html

https://fortune.com/2023/03/02/elon-musk-tesla-a-i-humanoid-robots-outnumber-people-economy/

The Coming War Between AI-Generated Spam and Junk Mail Filters

Image Credit: This is Engineering (Pexels)

AI-Generated Spam May Soon Be Flooding Your Inbox – It Will Be Personalized to Be Especially Persuasive

Each day, messages from Nigerian princes, peddlers of wonder drugs and promoters of can’t-miss investments choke email inboxes. Improvements to spam filters only seem to inspire new techniques to break through the protections.

Now, the arms race between spam blockers and spam senders is about to escalate with the emergence of a new weapon: generative artificial intelligence. With recent advances in AI made famous by ChatGPT, spammers could have new tools to evade filters, grab people’s attention and convince them to click, buy or give up personal information.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of John Licato, Assistant Professor of Computer Science and Director of AMHR Lab, University of South Florida.

As director of the Advancing Human and Machine Reasoning lab at the University of South Florida, I research the intersection of artificial intelligence, natural language processing and human reasoning. I have studied how AI can learn the individual preferences, beliefs and personality quirks of people.

This can be used to better understand how to interact with people, help them learn or provide them with helpful suggestions. But this also means you should brace for smarter spam that knows your weak spots – and can use them against you.

Spam, Spam, Spam

So, what is spam?

Spam is defined as unsolicited commercial emails sent by an unknown entity. The term is sometimes extended to text messages, direct messages on social media and fake reviews on products. Spammers want to nudge you toward action: buying something, clicking on phishing links, installing malware or changing views.

Spam is profitable. One email blast can make US$1,000 in only a few hours, costing spammers only a few dollars – excluding initial setup. An online pharmaceutical spam campaign might generate around $7,000 per day.

Legitimate advertisers also want to nudge you to action – buying their products, taking their surveys, signing up for newsletters – but whereas a marketer email may link to an established company website and contain an unsubscribe option in accordance with federal regulations, a spam email may not.

Spammers also lack access to mailing lists that users signed up for. Instead, spammers utilize counter-intuitive strategies such as the “Nigerian prince” scam, in which a Nigerian prince claims to need your help to unlock an absurd amount of money, promising to reward you nicely. Savvy digital natives immediately dismiss such pleas, but the absurdity of the request may actually select for naïveté or advanced age, filtering for those most likely to fall for the scams.

Advances in AI, however, mean spammers might not have to rely on such hit-or-miss approaches. AI could allow them to target individuals and make their messages more persuasive based on easily accessible information, such as social media posts.

Future of Spam

Chances are you’ve heard about the advances in generative large language models like ChatGPT. The task these generative LLMs perform is deceptively simple: given a text sequence, predict which token – think of this as a part of a word – comes next. Then, predict which token comes after that. And so on, over and over.

Somehow, training on that task alone, when done with enough text on a large enough LLM, seems to be enough to imbue these models with the ability to perform surprisingly well on a lot of other tasks.
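To make that loop concrete, here is a minimal sketch of next-token generation in Python, using the Hugging Face transformers library. GPT-2 stands in as the model purely for illustration – the article does not name a specific model, and production systems are far larger:

```python
# A minimal sketch of the next-token loop described above (illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Dear valued customer, we are pleased to"
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Repeat: score every token in the vocabulary, pick the most likely one,
# append it, and feed the longer sequence back in.
for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids).logits      # (1, sequence_length, vocab_size)
    next_id = logits[0, -1].argmax()          # greedy choice of the next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Real systems sample from the predicted distribution rather than always taking the top token, but the core mechanic – predict one token, append, repeat – is exactly what the paragraph above describes.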

Multiple ways to use the technology have already emerged, showcasing its ability to quickly adapt to, and learn about, individuals. For example, LLMs can write full emails in your writing style, given only a few examples of how you write. And there’s the classic example – now over a decade old – of Target figuring out a customer was pregnant before her father knew.

Spammers and marketers alike would benefit from being able to predict more about individuals with less data. Given your LinkedIn page, a few posts and a profile image or two, LLM-armed spammers might make reasonably accurate guesses about your political leanings, marital status or life priorities.

Our research showed that LLMs could be used to predict which word an individual will say next with a degree of accuracy far surpassing other AI approaches, in a word-generation task called the semantic fluency task. We also showed that LLMs can take certain types of questions from tests of reasoning abilities and predict how people will respond to them. This suggests that LLMs already have some knowledge of what typical human reasoning ability looks like.

If spammers make it past initial filters and get you to read an email, click a link or even engage in conversation, their ability to apply customized persuasion increases dramatically. Here again, LLMs can change the game. Early results suggest that LLMs can be used to argue persuasively on topics ranging from politics to public health policy.

Good for the Gander

AI, however, doesn’t favor one side or the other. Spam filters also should benefit from advances in AI, allowing them to erect new barriers to unwanted emails.

Spammers often try to trick filters with special characters, misspelled words or hidden text, relying on the human propensity to forgive small text anomalies – for example, “c1îck h.ere n0w.” But as AI gets better at understanding spam messages, filters could get better at identifying and blocking unwanted spam – and maybe even letting through wanted spam, such as marketing email you’ve explicitly signed up for. Imagine a filter that predicts whether you’d want to read an email before you even read it.
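To see why those tricks work on naive filters – and how normalization can defeat them – consider this toy sketch in Python. It is an illustration only, not any real filter’s implementation, and the blocklist phrase is hypothetical:

```python
# A toy illustration of undoing obfuscation like "c1îck h.ere n0w"
# before a message ever reaches a classifier.
import unicodedata

# A few common character-for-letter substitutions; real spammers use many more.
LEET_MAP = str.maketrans({"0": "o", "1": "l", "3": "e", "4": "a", "5": "s", "7": "t"})

def normalize(text: str) -> str:
    # Decompose accented characters ("î" -> "i" + combining mark), drop the marks.
    text = unicodedata.normalize("NFKD", text)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    # Undo leetspeak substitutions and strip filler punctuation.
    text = text.lower().translate(LEET_MAP)
    return "".join(ch for ch in text if ch.isalnum() or ch.isspace())

SPAM_PHRASES = {"click here now"}  # hypothetical blocklist for the example

def looks_spammy(message: str) -> bool:
    return any(phrase in normalize(message) for phrase in SPAM_PHRASES)

print(looks_spammy("c1îck h.ere n0w"))  # True
```

An AI-based filter goes far beyond a fixed blocklist, but the same idea applies: the better a model sees through surface-level obfuscation to the underlying message, the harder the spammer’s job becomes.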

Despite growing concerns about AI – as evidenced by Tesla, SpaceX and Twitter CEO Elon Musk, Apple founder Steve Wozniak and other tech leaders calling for a pause in AI development – a lot of good could come from advances in the technology. AI can help us understand how weaknesses in human reasoning might be exploited by bad actors and come up with ways to counter malevolent activities.

All new technologies can result in both wonder and danger. The difference lies in who creates and controls the tools, and how they are used.

Artificial Intelligence, Speculation, and ‘Technical Debt’

Image Credit: Focal Foto (Flickr)

AI Has Social Consequences, But Who Pays the Price?

As public concern about the ethical and social implications of artificial intelligence keeps growing, it might seem like it’s time to slow down. But inside tech companies themselves, the sentiment is quite the opposite. As Big Tech’s AI race heats up, it would be an “absolutely fatal error in this moment to worry about things that can be fixed later,” a Microsoft executive wrote in an internal email about generative AI, as The New York Times reported.

In other words, it’s time to “move fast and break things,” to quote Mark Zuckerberg’s old motto. Of course, when you break things, you might have to fix them later – at a cost.

In software development, the term “technical debt” refers to the implied cost of making future fixes as a consequence of choosing faster, less careful solutions now. Rushing to market can mean releasing software that isn’t ready, knowing that once it does hit the market, you’ll find out what the bugs are and can hopefully fix them then.

However, negative news stories about generative AI tend not to be about these kinds of bugs. Instead, much of the concern is about AI systems amplifying harmful biases and stereotypes and students using AI deceptively. We hear about privacy concerns, people being fooled by misinformation, labor exploitation and fears about how quickly human jobs may be replaced, to name a few. These problems are not software glitches. Realizing that a technology reinforces oppression or bias is very different from learning that a button on a website doesn’t work.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder.

As a technology ethics educator and researcher, I have thought a lot about these kinds of “bugs.” What’s accruing here is not just technical debt, but ethical debt. Just as technical debt can result from limited testing during the development process, ethical debt results from not considering possible negative consequences or societal harms. And with ethical debt in particular, the people who incur it are rarely the people who pay for it in the end.

Off to the Races

As soon as OpenAI’s ChatGPT was released in November 2022, the starter pistol for today’s AI race, I imagined the debt ledger starting to fill.

Within months, Google and Microsoft released their own generative AI programs, which seemed rushed to market in an effort to keep up. Google’s stock prices fell when its chatbot Bard confidently supplied a wrong answer during the company’s own demo. One might expect Microsoft to be particularly cautious when it comes to chatbots, considering Tay, its Twitter-based bot that was almost immediately shut down in 2016 after spouting misogynist and white supremacist talking points. Yet early conversations with the AI-powered Bing left some users unsettled, and it has repeated known misinformation.

When the social debt of these rushed releases comes due, I expect that we will hear mention of unintended or unanticipated consequences. After all, even with ethical guidelines in place, it’s not as if OpenAI, Microsoft or Google can see the future. How can someone know what societal problems might emerge before the technology is even fully developed?

The root of this dilemma is uncertainty, which is a common side effect of many technological revolutions, but magnified in the case of artificial intelligence. After all, part of the point of AI is that its actions are not known in advance. AI may not be designed to produce negative consequences, but it is designed to produce the unforeseen.

However, it is disingenuous to suggest that technologists cannot accurately speculate about what many of these consequences might be. By now, there have been countless examples of how AI can reproduce bias and exacerbate social inequities, but these problems are rarely publicly identified by tech companies themselves. It was external researchers who found racial bias in widely used commercial facial analysis systems, for example, and in a medical risk prediction algorithm that was being applied to around 200 million Americans. Academics and advocacy or research organizations like the Algorithmic Justice League and the Distributed AI Research Institute are doing much of this work: identifying harms after the fact. And this pattern doesn’t seem likely to change if companies keep firing ethicists.

Speculating – Responsibly

I sometimes describe myself as a technology optimist who thinks and prepares like a pessimist. The only way to decrease ethical debt is to take the time to think ahead about things that might go wrong – but this is not something that technologists are necessarily taught to do.

Scientist and iconic science fiction writer Isaac Asimov once said that sci-fi authors “foresee the inevitable, and although problems and catastrophes may be inevitable, solutions are not.” Of course, science fiction writers do not tend to be tasked with developing these solutions – but right now, the technologists developing AI are.

So how can AI designers learn to think more like science fiction writers? One of my current research projects focuses on developing ways to support this process of ethical speculation. I don’t mean designing with far-off robot wars in mind; I mean the ability to consider future consequences at all, including in the very near future.

This is a topic I’ve been exploring in my teaching for some time, encouraging students to think through the ethical implications of sci-fi technology in order to prepare them to do the same with technology they might create. One exercise I developed is called the Black Mirror Writers Room, where students speculate about possible negative consequences of technology like social media algorithms and self-driving cars. Often these discussions are based on patterns from the past or the potential for bad actors.

Ph.D. candidate Shamika Klassen and I evaluated this teaching exercise in a research study and found that there are pedagogical benefits to encouraging computing students to imagine what might go wrong in the future – and then brainstorm about how we might avoid that future in the first place.

However, the purpose isn’t to prepare students for those far-flung futures; it is to teach speculation as a skill that can be applied immediately. This skill is especially important for helping students imagine harm to other people, since technological harms often disproportionately impact marginalized groups that are underrepresented in computing professions. The next steps for my research are to translate these ethical speculation strategies for real-world technology design teams.

Time to Hit Pause?

In March 2023, an open letter with thousands of signatures advocated pausing the training of AI systems more powerful than GPT-4. Unchecked, AI development “might eventually outnumber, outsmart, obsolete and replace us,” or even cause a “loss of control of our civilization,” its writers warned.

As critiques of the letter point out, this focus on hypothetical risks ignores actual harms happening today. Nevertheless, I think there is little disagreement among AI ethicists that AI development needs to slow down – that developers throwing up their hands and citing “unintended consequences” is not going to cut it.

We are only a few months into the “AI race” picking up significant speed, and I think it’s already clear that ethical considerations are being left in the dust. But the debt will come due eventually – and history suggests that Big Tech executives and investors may not be the ones paying for it.

Blackboxstocks (BLBX) – Announces Merger; Reports 4Q22 Results


Tuesday, April 18, 2023

Blackboxstocks, Inc. is a financial technology and social media hybrid platform offering real-time proprietary analytics and news for stock and options traders of all levels. Our web-based software employs “predictive technology” enhanced by artificial intelligence to find volatility and unusual market activity that may result in the rapid change in the price of a stock or option. Blackbox continuously scans the NASDAQ, New York Stock Exchange, CBOE, and all other options markets, analyzing over 10,000 stocks and up to 1,500,000 options contracts multiple times per second. We provide our users with a fully interactive social media platform that is integrated into our dashboard, enabling our users to exchange information and ideas quickly and efficiently through a common network. We recently introduced a live audio/video feature that allows our members to broadcast on their own channels to share trade strategies and market insight within the Blackbox community. Blackbox is a SaaS company with a growing base of users that spans 42 countries; current subscription fees are $99.97 per month or $959.00 annually. For more information, go to: www.blackboxstocks.com.

Joe Gomes, Managing Director – Generalist Analyst, Noble Capital Markets, Inc.

Joshua Zoepfel, Research Associate, Noble Capital Markets, Inc.

Refer to the full report for the price target, fundamental analysis, and rating.

Merger. Blackboxstocks announced its intent to merge with Evtec Group. BLBX shareholders are expected to retain 8.34% of the combined company’s common stock post-merger. While details of the transaction are limited, management believes the transaction will provide significant and long-term value for BLBX shareholders. Blackboxstocks will operate as a subsidiary of Evtec. In its just-filed 10-K for 2022, the Company noted it was exploring strategic alternatives.

Who Is Evtec Group? A private U.K.-based company, Evtec Group is a leading parts supplier for luxury brands in the EV and performance automotive market. The acquisition of Blackboxstocks provides Evtec with a pathway to become publicly traded in the U.S., while enabling Blackboxstocks access to capital needed to take the next step forward in its business, in our view.


Get the Full Report

Equity Research is available at no cost to Registered users of Channelchek. Not a Member? Click ‘Join’ to join the Channelchek Community. There is no cost to register, and we never collect credit card information.

This Company Sponsored Research is provided by Noble Capital Markets, Inc., a FINRA and S.E.C. registered broker-dealer (B/D).

*Analyst certification and important disclosures included in the full report. NOTE: investment decisions should not be based upon the content of this research summary. Proper due diligence is required before making any investment decision. 

Regulate AI? Elon Musk Thinks It’s an Intelligent Idea

Image Credit: Steve Jurvetson (Flickr)

Elon Musk Unveils How He Expects to Approach Artificial Intelligence

The CEO of SpaceX, Twitter, and Tesla, as well as the founder of The Boring Company and Neuralink, says he wants to do something to serve humanity. Elon Musk has been concerned that artificial intelligence may have the propensity to turn against mankind. He said the best way to avoid the problem is to make artificial intelligence curious. “I’m going to start something which I call ‘TruthGPT’ or a maximum truth-seeking AI that tries to understand the nature of the universe,” Musk said in an interview with Tucker Carlson. The billionaire thinks that an AI that cares about understanding the universe is “unlikely to annihilate humans” as we’re an “interesting part of the universe, hopefully.” During the discussion, he emphasized that this focus on understanding the universe will set the project apart from competitors such as OpenAI’s ChatGPT and Google’s Bard.

This ambitious new goal of Musk’s was introduced with few details, so it remains unclear how, exactly, a machine becomes curious. He did repeat that he considers AI dangerous if mismanaged, with a “potential for civilizational destruction.” In fact, he called for some level of government oversight of AI projects. Musk isn’t new to the technology; he is one of the co-founders of OpenAI, the company that has been making headlines with its AI chatbot ChatGPT.

The new technology would likely compete with AI efforts by Sam Altman-led OpenAI, which as mentioned was initially funded by Musk, Google’s DeepMind, and other AI initiatives around the world.

Regulating A.I.

Musk told Carlson he envisions a regulatory agency that “initially seeks insight into AI, then solicits opinion from industry, and then has proposed rule-making,” something like the Federal Aviation Administration and how it interacts with aviation and aerospace companies. Once agency- and industry-accepted rules are in place, “I think we’ll have a better chance of advanced AI being beneficial to humanity,” Musk said. Musk signed a letter calling for a pause on advanced AI research, joining a group of signers who believe the technology can potentially harm society.

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?” the letter stated.

Part two of the interview is scheduled to air at 8 PM ET April 18 on Fox News.

Paul Hoffman

Managing Editor, Channelchek

https://www.foxnews.com/media/elon-musk-develop-truthgpt-warns-civilizational-destruction-ai

https://www.cnbc.com/2023/04/18/musk-calls-plans-truthgpt-ai-to-rival-openai-deepmind.html

What Americans Really Think of Cryptocurrency

Image Credit: Duncan Rawlinson (Flickr)

Does the News Chatter Surrounding Cryptocurrencies Match the Interest in the Asset Class?

Over the 14 years since bitcoin sprang to life, expectations have ranged from overwhelming enthusiasm over its possibilities to fear of the risks inherent in a payment method that is, as yet, not widely integrated. A recent 50% run-up in bitcoin has fired up the believers again, but the most talked-about crypto is still valued at less than half of its high point. Beyond volatility, issues that cause some to disregard cryptocurrencies as a payment method include regulatory threats, the environmental cost of mining, and failed exchanges. During the week of March 13-19, Pew Research Center conducted a survey measuring usage, confidence, and investment success. The survey is important for those paying attention to crypto, as it cuts through personal opinions and offers less biased statistics.

Survey Says…

Most Americans – 88% – have heard of cryptocurrency. Almost 40% of those aware of crypto told surveyors they are not at all confident in its reliability and safety, with an additional 36% not very confident. Just 4% say they are extremely confident, and 2% very confident. Of those that have heard of it, 18% say they are somewhat confident.

Digital technology tends to be less embraced with age. Although wariness of crypto is high overall, some age groups are more concerned than others. Among those 50 and older who know about cryptocurrency, 85% say they are not confident in its reliability and safety. Among adults 49 and younger, the figure drops to 66%.

Does sex play a role in skepticism toward cryptocurrencies? Among the 88% who have heard of crypto, 80% of women say they are not confident in it, compared with 71% of men.

Does experience lead to acceptance, or acceptance lead to experience? Of those that have invested in one or more digital currencies, 20% say they are extremely or very confident that crypto is safe and reliable. For those with no experience investing in it, the share drops to 2%. It is worth noting that of the group with crypto experience, 43% still responded that they are not very or not at all confident in it.

Cryptocurrency Usage in the U.S.

Younger males are more likely to use cryptocurrency compared with men 50 and older and women overall. Men ages 18 to 29 are more than twice as likely as women of the same age to have used crypto – 41% compared with 16%.

Among upper-income adults, 22% have used crypto, with middle incomes slightly lower at 19%. Of lower-income adults, 13% have ever invested in, traded or used cryptocurrency.

Few of those who have invested in or transacted using cryptocurrency did so for the first time within the past year. When Pew Research asked when they first used cryptocurrency, 74% of those who have ever invested in, traded, or used it said they did so one to five years ago. Only 16% say they first did this within the past year, and 10% more than five years ago.

College graduates (25%) and those with some college experience (20%) were more likely than those with just a high school education or less (10%) to answer that their cryptocurrency investments hurt their personal finances.

Results of Investment

Of those that have invested in crypto, 15% say their investments have done better than expected, 32% say they have done about the same as expected and 7% are unsure. 19% of cryptocurrency users say the investments have hurt their personal finances at least a little.

The largest share of users, 45%, indicated their investments performed worse than expected.

Measuring the impact the speculation had on users’ personal finances, three-in-five users (60%) say their investments have neither helped nor hurt. Roughly equal shares say that these investments have helped (20%) or hurt (19%) their finances. Just 7% say cryptocurrency has helped their finances a lot, and 3% say it has hurt a lot.

Take Away

There seems to be far more noise reporting on cryptocurrencies than actual activity or usage. This could mean a number of things. One reading is that the asset’s potential, once the fear lifts, is high, since that potential includes the large percentage of people now keeping their distance. By this argument, the ongoing dramatic headlines are warranted: once the potential is realized, there could be much greater movement than we have already seen. Bitcoin once went from pennies to $68,000. Another explanation for the heavy news coverage is that the asset class is still novel, so we are all evaluating it as investors; since we’re showing interest or intrigue, news services report on it to gain audience. If we turn our attention elsewhere, that is what we will hear more about.

It is truly a speculative asset class with little history. While some are betting everything on crypto, far more are currently just spectators on the sidelines. The hype and attention it is currently receiving may not match actual investor interest.

Paul Hoffman

Managing Editor, Channelchek

Source

https://www.pewresearch.org/wp-content/uploads/2023/04/sr_2023.4.10_crypto_topline.pdf

Twitter is Now Seated with eToro, Which is a Breakthrough Expansion for Both

Image Credit: Web Summit (Flickr)

Elon Musk Announces New Financial Functionality on Twitter

Starting today, Twitter will provide tweeters the ability to buy and sell stocks and crypto on its platform via eToro. Twitter owner Elon Musk has been indicating he intends to turn the popular micro-blogging platform into a “super app.” Today’s move shows substantial headway in allowing financial transactions to be conducted on the social media platform. Other company goals since Musk’s purchase include ride hailing and attracting video influencers who may be disenchanted with YouTube restrictions on speech.

What Will the Twitter-eToro Partnership Provide?

Founded in 2007, eToro has become one of the largest social investment networks and trading platforms. According to its website, it is “built on social collaboration and investor education: a community where users can connect, share, and learn.”

Twitter will partner with the platform to allow users (known as tweeters and Twitterers) to trade stocks and cryptocurrencies as part of a deal with the social investing company.

The partnership will let users view charts and trade stocks, cryptocurrencies, and other investment assets from eToro via its mobile platform. Twitter users already have access to real-time market data; this arrangement adds all the bells and whistles a modern trading app can provide.

Twitter will be expanding its use of cashtags as well. Twitter added pricing data for $Cashtags (a company ticker preceded by “$”) in December 2022. Since January, there have been more than 420 million searches using cashtags – an average of 4.7 million a day.
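For the curious, spotting a cashtag in text is a simple pattern-matching exercise. The sketch below is illustrative only – it is not Twitter’s actual implementation:

```python
# An illustrative pattern for cashtags: "$" followed by a short alphabetic ticker.
import re

CASHTAG = re.compile(r"\$([A-Za-z]{1,5})\b")

tweet = "Watching $BLBX and $TSLA ahead of the open."
print(CASHTAG.findall(tweet))  # ['BLBX', 'TSLA']
```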

eToro CEO Yoni Assia told CNBC the deal will help better connect the two brands, adding that in recent years its users have increasingly turned to Twitter to “educate themselves about the markets.”

Assia said there is a great deal of “very high quality” content available in real-time and that the partnership with Twitter will help eToro expand to reach new audiences tapping this as a source of information.

Update on Elon

After Musk’s purchase of Twitter, many advertisers stepped back and watched to see how far the company would go in allowing less moderated interaction. On Wednesday (April 12), Musk said that “almost all” advertisers had returned to the app. However, Stellantis and Volkswagen, two large competitors of Musk-run Tesla, said they do not yet plan to resume advertising.

Musk told a Morgan Stanley conference last month he wants Twitter to become “the biggest financial institution in the world.” This prompts those who follow Musk to ask, “Why stop there, why not include Mars?”

What Else

Be sure to follow Channelchek on Twitter (@channelchek) to stay up to date on market insights, news, videos, and of course, top-tier investment analyst research on small and microcap opportunities.

Paul Hoffman

Managing Editor, Channelchek

Sources

https://www.etoro.com/en-us/about/

https://www.cnbc.com/2023/04/13/twitter-to-let-users-access-stocks-crypto-via-etoro-in-finance-push.html?__source=iosappshare%7Ccom.apple.UIKit.activity.PostToTwitter

https://www.forbes.com/sites/roberthart/2023/04/13/twitter-will-let-users-buy-stocks-and-crypto-as-elon-musk-pushes-for-everything-app/?sh=332662a26882

https://www.bloomberg.com/news/live-blog/2023-03-07/elon-musk-speaks-at-morgan-stanley-conference

Blackboxstocks (BLBX) – A Reverse Stock Split at 1-for-4


Tuesday, April 11, 2023

Blackboxstocks, Inc. is a financial technology and social media hybrid platform offering real-time proprietary analytics and news for stock and options traders of all levels. Our web-based software employs “predictive technology” enhanced by artificial intelligence to find volatility and unusual market activity that may result in the rapid change in the price of a stock or option. Blackbox continuously scans the NASDAQ, New York Stock Exchange, CBOE, and all other options markets, analyzing over 10,000 stocks and up to 1,500,000 options contracts multiple times per second. We provide our users with a fully interactive social media platform that is integrated into our dashboard, enabling our users to exchange information and ideas quickly and efficiently through a common network. We recently introduced a live audio/video feature that allows our members to broadcast on their own channels to share trade strategies and market insight within the Blackbox community. Blackbox is a SaaS company with a growing base of users that spans 42 countries; current subscription fees are $99.97 per month or $959.00 annually. For more information, go to: www.blackboxstocks.com.

Joe Gomes, Managing Director – Generalist Analyst, Noble Capital Markets, Inc.

Joshua Zoepfel, Research Associate, Noble Capital Markets, Inc.

Refer to the full report for the price target, fundamental analysis, and rating.

A Ratio Set. Yesterday, Blackboxstocks announced that it has filed an amendment to the Company’s articles of incorporation with the Nevada Secretary of State to set a reverse stock split ratio of one-for-four. The amendment took effect on April 10, 2023 at 4:01 p.m. Eastern Daylight Time, and trading on a split-adjusted basis begins on April 11, 2023. The exchange agent for the split will be Securities Transfer Corporation.

The Process. The amendment process began last month when the Company’s Board of Directors adopted resolutions advising and recommending that stockholders approve a reverse stock split of one-for-seven. The stockholders voted to approve the split and amendment in the same month. The Board later set the split ratio at one-for-four on April 7, 2023.
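For readers unfamiliar with the mechanics, a reverse split consolidates shares without changing a position’s market value. A minimal illustration with hypothetical numbers:

```python
# In a 1-for-4 reverse split, the share count divides by the ratio and the
# price multiplies by it, leaving market value unchanged (fractional shares
# are typically cashed out; simple truncation stands in for that here).
shares, price = 1_000, 0.85   # hypothetical pre-split position
ratio = 4                     # one-for-four

new_shares = shares // ratio              # 250 shares
new_price = round(price * ratio, 2)       # $3.40 per share
print(new_shares, new_price, round(new_shares * new_price, 2))  # 250 3.4 850.0
```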


Get the Full Report

Equity Research is available at no cost to Registered users of Channelchek. Not a Member? Click ‘Join’ to join the Channelchek Community. There is no cost to register, and we never collect credit card information.

This Company Sponsored Research is provided by Noble Capital Markets, Inc., a FINRA and S.E.C. registered broker-dealer (B/D).

*Analyst certification and important disclosures included in the full report. NOTE: investment decisions should not be based upon the content of this research summary. Proper due diligence is required before making any investment decision. 

Unexpected Adjustments Among Today’s Self-Directed Investors

Image Credit: Focal Foto (Flickr)

How Decision-Making and Market Impact is Shifting for Retail Investors

Retail investors’ preferences change over time. This impacts sector strength and the overall direction of markets. Even the methods of interacting with exchanges change as newer products like trading apps, artificial intelligence, and exchange-traded products (ETPs) become available.

Retail’s influence is growing, and, anecdotally, preference shifts happen more quickly. Within this category are self-directed investors with different knowledge bases and at different stages of their lives. As people move through those stages, their concerns, outlooks, and risk tolerances adjust. Nasdaq just published its second annual survey of retail investors to measure how their interests are changing and what impact that may have. The survey of 2,000 investors from Gen Z to Baby Boomers uncovered some surprising trends in decision-making, fears, comfort zones, and asset class preferences.

Generational Groupings

The Nasdaq survey exposed a number of commonalities among the different generations. All listed their greatest concerns as inflation and recession, but while the youngest (Gen Z, born 1997 – 2012) found housing and real estate a deep concern, the oldest group (Baby Boomers, born 1946 – 1964) is more concerned about tax rate changes. The generations in the middle (Gen X, born 1965 – 1980, and Millennials, born 1981 – 1996) show a greater concern over interest rate changes.

The survey also sought to understand how much time investors in each generation spent researching buy and sell decisions. Among Gen Z, 48% spent less than an hour, while 3% of these younger adults evaluated the transaction for at least a month. The next age category, Millennials, spent a bit more time on diligence: only 28% would buy or sell with less than an hour of thought put into the transaction, and 4% took a month or longer to decide. The trend toward more research time continued with Gen X, which showed a greater propensity to spend time evaluating before a purchase: only 15% would press the buy or sell button with less than an hour spent understanding the investment, and 7% of Gen X investors say they take a month or longer.

A big difference between the youngest and the oldest: although almost half of Gen Z investors said they spend fewer than 60 minutes researching, none said they did no research at all. Of the Baby Boomers surveyed, 24% indicated they spend no time researching before they buy or sell. It’s unclear whether this is because the older group is less tech-savvy, hires a professional to do the research, or believes they have the knowledge to move without digging deeper.

Overlap in Generational Preferences

Data Sources: Nasdaq

Other Trends

Despite listing recession and inflation as their top concerns, 71% of Gen Z and 50% of Millennials say they are investing more aggressively. This stands in stark contrast to the 9% of Boomers and 20% of Gen X describing their strategies as more aggressive than the previous year.

The influence of Twitter, Facebook and even TikTok keeps expanding. 73% of Gen Z use TikTok as a source for investment information, an 18% increase from the prior year. Baby Boomer TikTok investment use rose by 16% to its current 25%.

Year-to-year, the investment themes show ESG and crypto interest sinking, while robotics and other autonomous technology is where focus has increased most. Younger investors are more active in their investments than before and more frequently conduct their own research ahead of transacting. Investors of all ages are more likely than before to consider alternative products, including options, cryptocurrencies, and exchange-traded products.

Competition among brokerage platforms is as fierce as in any innovative, tech-heavy industry. The availability of advanced technology and commission-free trading has made investing more accessible, especially for younger investors.

Take Away

The second annual survey conducted by Nasdaq indicates that the retail investor growth and power we’ve experienced in recent years was not a fad; this cohort keeps growing and becoming more sophisticated. Retail investors are more influential than ever, and since they are here to stay, they are worth understanding. They are expected to continue to disrupt and influence markets dramatically.

As retail trends take on greater importance in defining the day-to-day challenges of investing and mapping the markets’ future, these self-directed investors are finding more services to accommodate them. One source is the Channelchek platform, where retail and institutional investors of all ages can review research reports, absorb video discussions with the management of interesting opportunities, expand understanding through daily articles, and, if relevant, attend a roadshow to meet a particular company’s management.

Sign up for Channelchek emails and full access here.

Paul Hoffman

Managing Editor, Channelchek

Sources

https://www.nasdaq.com/articles/retail-revival%3A-how-a-year-of-market-volatility-reshaped-investor-strategies

https://nd.nasdaq.com/GENZ

https://nd.nasdaq.com/Millennials

https://nd.nasdaq.com/GENX

https://nd.nasdaq.com/BabyBoomers

About the Bitcoin to $1 Million by Summer 2023 Wager

Image Credit: Fortune Brainstorm TECH (Flickr)

Are Balaji Srinivasan and Cathie Wood Right About the Future Value of Bitcoin?

The former Chief Technology Officer (CTO) of Coinbase is either extremely bullish on Bitcoin or has other reasons for the tweet that set off a huge price jump in the cryptocurrency. Balaji Srinivasan is a very influential investor, especially in the tech space. Last Friday, he reaffirmed his belief in a bet he made in March that bitcoin would reach $1 million per token within 90 days. At stake in the bet is $2 million. Crypto investors trying to understand the strong conviction behind the wager may first need to understand the person behind the tweet.

Who is Balaji Srinivasan?

The Indian-born, U.S.-raised tech entrepreneur, investor, and academic has a Ph.D. in Electrical Engineering and an MS in Chemical Engineering from the Massachusetts Institute of Technology (MIT). Srinivasan co-founded a number of startups, including Earn.com, a blockchain payments platform, and the genomics company Counsyl. He has worked as a General Partner at a prominent Silicon Valley venture capital firm and as the Chief Technology Officer at the crypto exchange Coinbase.

Srinivasan has a large following as a commentator on the subject of technology and its social and political implications. Popular topics of his numerous articles and talks include the future of technology, the rise of decentralized systems, and the potential impact of emerging technologies on society. The tech guru has lectured at Stanford University and has served as an advisor to the FDA and the World Economic Forum.

Twitter: @balajis

What is Behind this Forecast?

In an ARK Invest podcast last Friday (April 6), Srinivasan explained that bitcoin has good momentum and that he still believes it will reach $1 million within a three-month horizon. He cited concerns that the regional banking crisis will destabilize the dollar and cause the Fed to dump more dollars into the system; fear and inflation in the coming months are the drivers. Cathie Wood agreed with the direction and the potential for bitcoin to hit $1 million, but her reasons were a bit different. She believes fear will be one driver but reiterated her call for deflation. “We are very positive about Bitcoin as well. But your forecast was in the context of hyperinflation associated with fiat currencies. Our optimism is more of a function of fears of deflation and counter-party risk. Both of those should accrue to Bitcoin’s benefit,” Wood explained on her company’s podcast.

On the surface, the bet and its bitcoin-will-hit-$1-million-by-summer prediction seem highly improbable. It would take immense capital flows into the cryptocurrency, and there is doubt the exchanges could handle the migration of assets. Also, the question of what would prompt a run from traditional currency strong enough to send bitcoin skyrocketing has still not been satisfactorily answered.

The one-hour, 17-minute podcast, available at the link below under “Sources,” is nonetheless thought-provoking. These are two well-regarded tech analysts standing behind something that sounds outlandish.

Another possible explanation for his outward conviction is that this isn’t a risky bet for Balaji. He is presumed to own a considerable amount of bitcoin. The tick up on news of his bet (bitcoin is up near 25% since his tweet) could more than offset a $2 million loss on the wager. The timing of the value increase in BTC makes it appear that any loss could be self-funded by the attention the bet has given the cryptocurrency.

Take Away

Bitcoin is higher than it was when tech guru Balaji Srinivasan placed his public wager. However, at $28,500, it would still have to rise by $971,500 over the next few months. Supporting the idea that bitcoin is going up substantially are two tech and disruption gurus whose thoughts are worth considering alongside your own observations.

Paul Hoffman

Managing Editor, Channelchek

Sources

https://ark-invest.com/podcasts/

https://www.coindesk.com/consensus-magazine/2023/04/01/balaji-srinivasans-1m-bitcoin-bet-could-be-right-but-i-hope-hes-wrong/

The FDA’s Action Plan Regarding Artificial Intelligence and Machine Learning

Image Credit:  Interscatter Data Sharing Contact Lens, UW News (Flickr)

The Challenges Surrounding AI/ML are Taken Head-On by the FDA

Should artificial intelligence or machine learning (AI/ML) be allowed to alter FDA-approved software in medical devices? If so, where should the guardrails be set? The discussions and debates surrounding AI/ML are heated; some believe the technology may destroy humanity, while others look forward to the speed of advancement it will allow. The FDA is getting out ahead of this debate. This week the agency drafted a list of “guiding principles” intended to begin developing best practices for machine learning within medical devices.

Background

The FDA views its role as protecting patients while at the same time not standing in the way of progress – in the case of ML, not preventing modifications to medical treatments or procedures that would improve outcomes. AI/ML has the potential to evaluate data sets more quickly, improve diagnosis, adjust how devices are used, and alter processes overall based on what is learned.

On April 3, the FDA drafted AI-Enabled Medical Device Life Cycle Plan Guidance, with a comment period ending July 3, 2023. The U.S. regulator’s proposal attempts to establish science-based requirements for medical devices powered by artificial intelligence and machine learning. The overall goal is to avoid slowing the implementation of improved devices that may be quickly modified and updated to deliver a rapid response to new data.

Greg Aurand, Senior Healthcare Services & Medical Devices Analyst at Noble Capital Markets, summed up the purpose for the FDA’s actions in this way: “The FDA needs to move cautiously, but they don’t wish to slow down healthcare improvements on an ongoing basis.” Aurand gave examples where machine learning has the potential to make better assessments, better decipher data sets such as antibiotic resistance, and improve results while perhaps taming medical expenses. He said, “new draft guidelines from the FDA should make it easier for approval of modifications to occur so previously unrecognized improvements may occur within the guidelines, and the process is less static.”

How is Artificial Intelligence Likely to Revise Medical Devices?

As is written into the FDA guidance, “Artificial intelligence (AI) and machine learning (ML) technologies have the potential to transform health care by deriving new and important insights from the vast amount of data generated during the delivery of health care every day. Medical device manufacturers are using these technologies to innovate their products to better assist health care providers and improve patient care.”  

The FDA accepts that a great benefit of AI/ML in software is its ability to learn from real-world use and experience, and then to improve its own performance.

How is the FDA Expected to Regulate AI/ML Devices?  

Traditionally, the FDA reviews medical devices and improvements through a premarket pathway for approval. The FDA may also review and clear modifications to medical devices, including software as a medical device, depending on the significance or risk posed to patients by that modification. The industry is going through a paradigm shift which the FDA is helping to enable.

The FDA’s current paradigm of medical device regulation was not designed for adaptive artificial intelligence. Under the FDA’s current approach to software modifications, many of these artificial intelligence and machine learning-driven software changes to a device would need premarket review. The new regulation is expected to create broader parameters of pre-approval, allowing adjustments within set allowable boundaries.

A new framework envisioned by the FDA includes a “predetermined change control plan” in premarket submissions. This plan would include the types of anticipated modifications, referred to as “Software as a Medical Device Pre-Specifications.” The FDA calls the associated methodology for implementing those changes in a measured, risk-managed way the “Algorithm Change Protocol.”

Take Away

Artificial intelligence will transform many industries, and while some want to hit the pause button on progress, the FDA is trying to define how much control can be left to machine learning. The guidance released in April, with a three-month comment period, is expected to allow medical equipment and software designers to progress into the unknown, with all stakeholders having better outcomes for patients as their goal.

If you wish to send comments to the FDA, the agency requests they be received by July 3, 2023 to ensure your comment on the draft guidance is considered before work begins on the final version.

Paul Hoffman

Managing Editor, Channelchek

Sources

https://www.fda.gov/regulatory-information/search-fda-guidance-documents/marketing-submission-recommendations-predetermined-change-control-plan-artificial

https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device

https://www.fda.gov/media/145022/download

https://www.fda.gov/media/166704/download