The Coming War Between AI Generated Spam and Junk Mail Filters

Image Credit: This is Engineering (Pexels)

AI-Generated Spam May Soon Be Flooding Your Inbox – It Will Be Personalized to Be Especially Persuasive

Each day, messages from Nigerian princes, peddlers of wonder drugs and promoters of can’t-miss investments choke email inboxes. Improvements to spam filters only seem to inspire new techniques to break through the protections.

Now, the arms race between spam blockers and spam senders is about to escalate with the emergence of a new weapon: generative artificial intelligence. With recent advances in AI made famous by ChatGPT, spammers could have new tools to evade filters, grab people’s attention and convince them to click, buy or give up personal information.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of John Licato, Assistant Professor of Computer Science and Director of AMHR Lab, University of South Florida.

As director of the Advancing Human and Machine Reasoning lab at the University of South Florida, I research the intersection of artificial intelligence, natural language processing and human reasoning. I have studied how AI can learn the individual preferences, beliefs and personality quirks of people.

This can be used to better understand how to interact with people, help them learn or provide them with helpful suggestions. But this also means you should brace for smarter spam that knows your weak spots – and can use them against you.

Spam, Spam, Spam

So, what is spam?

Spam is defined as unsolicited commercial emails sent by an unknown entity. The term is sometimes extended to text messages, direct messages on social media and fake reviews on products. Spammers want to nudge you toward action: buying something, clicking on phishing links, installing malware or changing views.

Spam is profitable. One email blast can make US$1,000 in only a few hours, costing spammers only a few dollars – excluding initial setup. An online pharmaceutical spam campaign might generate around $7,000 per day.

Legitimate advertisers also want to nudge you to action – buying their products, taking their surveys, signing up for newsletters – but whereas a marketer email may link to an established company website and contain an unsubscribe option in accordance with federal regulations, a spam email may not.

Spammers also lack access to mailing lists that users signed up for. Instead, spammers utilize counter-intuitive strategies such as the “Nigerian prince” scam, in which a Nigerian prince claims to need your help to unlock an absurd amount of money, promising to reward you nicely. Savvy digital natives immediately dismiss such pleas, but the absurdity of the request may actually select for naïveté or advanced age, filtering for those most likely to fall for the scams.

Advances in AI, however, mean spammers might not have to rely on such hit-or-miss approaches. AI could allow them to target individuals and make their messages more persuasive based on easily accessible information, such as social media posts.

Future of Spam

Chances are you’ve heard about the advances in generative large language models like ChatGPT. The task these generative LLMs perform is deceptively simple: given a text sequence, predict which token – think of this as a part of a word – comes next. Then, predict which token comes after that. And so on, over and over.

Somehow, training on that task alone, when done with enough text on a large enough LLM, seems to be enough to imbue these models with the ability to perform surprisingly well on a lot of other tasks.
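The loop described above (predict a token, append it, repeat) can be sketched with a toy counting model. This is purely illustrative: real LLMs learn next-token probabilities with neural networks trained on vast corpora, but the generation loop has the same shape.

```python
# Toy illustration of next-token prediction: a bigram "language model"
# trained by counting which token follows which. Real LLMs learn these
# probabilities with neural networks, but generation works the same way:
# predict a token, append it, repeat.
from collections import Counter, defaultdict

def train_bigram(text: str) -> dict:
    """For each token, count how often each other token follows it."""
    tokens = text.split()
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def generate(model: dict, prompt: str, max_tokens: int = 5) -> str:
    """Greedy generation: repeatedly append the most likely next token."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        followers = model.get(tokens[-1])
        if not followers:
            break
        tokens.append(followers.most_common(1)[0][0])
    return " ".join(tokens)

corpus = "the prince needs your help the prince promises a reward"
model = train_bigram(corpus)
print(generate(model, "the"))  # greedy continuation of the toy corpus
```

Swap the counting table for a trillion-parameter neural network and the toy corpus for a large slice of the internet, and this same loop produces fluent paragraphs rather than parroted fragments.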

Multiple ways to use the technology have already emerged, showcasing its ability to quickly adapt to, and learn about, individuals. For example, LLMs can write full emails in your writing style, given only a few examples of how you write. And there’s the classic example – now over a decade old – of Target figuring out a customer was pregnant before her father knew.

Spammers and marketers alike would benefit from being able to predict more about individuals with less data. Given your LinkedIn page, a few posts and a profile image or two, LLM-armed spammers might make reasonably accurate guesses about your political leanings, marital status or life priorities.

Our research showed that LLMs could be used to predict which word an individual will say next with a degree of accuracy far surpassing other AI approaches, in a word-generation task called the semantic fluency task. We also showed that LLMs can take certain types of questions from tests of reasoning abilities and predict how people will respond to those questions. This suggests that LLMs already have some knowledge of what typical human reasoning ability looks like.

If spammers make it past initial filters and get you to read an email, click a link or even engage in conversation, their ability to apply customized persuasion increases dramatically. Here again, LLMs can change the game. Early results suggest that LLMs can be used to argue persuasively on topics ranging from politics to public health policy.

Good for the Gander

AI, however, doesn’t favor one side or the other. Spam filters also should benefit from advances in AI, allowing them to erect new barriers to unwanted emails.

Spammers often try to trick filters with special characters, misspelled words or hidden text, relying on the human propensity to forgive small text anomalies – for example, “c1îck h.ere n0w.” But as AI gets better at understanding spam messages, filters could get better at identifying and blocking unwanted spam – and maybe even letting through wanted spam, such as marketing email you’ve explicitly signed up for. Imagine a filter that predicts whether you’d want to read an email before you even read it.
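As a minimal sketch (not any production filter's actual method), here is how a filter might normalize obfuscated text like “c1îck h.ere n0w” before matching it against known spam phrases. The digit-to-letter map and the blocklist phrase are assumptions made for the example:

```python
import re
import unicodedata

# Hypothetical look-alike substitutions spammers use: 0->o, 1->l, 3->e, 5->s.
LEET = str.maketrans("0135", "oles")

SPAM_PHRASES = {"click here now"}  # illustrative blocklist entry

def normalize(text: str) -> str:
    """Undo common obfuscations, e.g. 'c1îck h.ere n0w' -> 'click here now'."""
    # Decompose accented characters (î -> i + combining mark), drop the marks.
    decomposed = unicodedata.normalize("NFKD", text)
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    # Lowercase, map look-alike digits back to letters, drop stray punctuation.
    cleaned = re.sub(r"[^\w\s]", "", stripped.lower().translate(LEET))
    return re.sub(r"\s+", " ", cleaned).strip()

def looks_spammy(text: str) -> bool:
    """Match the normalized text against known spam phrases."""
    return any(phrase in normalize(text) for phrase in SPAM_PHRASES)

print(looks_spammy("c1îck h.ere n0w"))  # True: the obfuscation is undone first
```

Modern AI-based filters go further, classifying the meaning of a message rather than matching fixed phrases, but normalization of this kind illustrates why character tricks alone are a losing strategy for spammers.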

Despite growing concerns about AI – as evidenced by Tesla, SpaceX and Twitter CEO Elon Musk, Apple founder Steve Wozniak and other tech leaders calling for a pause in AI development – a lot of good could come from advances in the technology. AI can help us understand how weaknesses in human reasoning might be exploited by bad actors and come up with ways to counter malevolent activities.

All new technologies can result in both wonder and danger. The difference lies in who creates and controls the tools, and how they are used.

Artificial Intelligence, Speculation, and ‘Technical Debt’

Image Credit: Focal Foto (Flickr)

AI Has Social Consequences, But Who Pays the Price?

As public concern about the ethical and social implications of artificial intelligence keeps growing, it might seem like it’s time to slow down. But inside tech companies themselves, the sentiment is quite the opposite. As Big Tech’s AI race heats up, it would be an “absolutely fatal error in this moment to worry about things that can be fixed later,” a Microsoft executive wrote in an internal email about generative AI, as The New York Times reported.

In other words, it’s time to “move fast and break things,” to quote Mark Zuckerberg’s old motto. Of course, when you break things, you might have to fix them later – at a cost.

In software development, the term “technical debt” refers to the implied cost of making future fixes as a consequence of choosing faster, less careful solutions now. Rushing to market can mean releasing software that isn’t ready, knowing that once it does hit the market, you’ll find out what the bugs are and can hopefully fix them then.

However, negative news stories about generative AI tend not to be about these kinds of bugs. Instead, much of the concern is about AI systems amplifying harmful biases and stereotypes and students using AI deceptively. We hear about privacy concerns, people being fooled by misinformation, labor exploitation and fears about how quickly human jobs may be replaced, to name a few. These problems are not software glitches. Realizing that a technology reinforces oppression or bias is very different from learning that a button on a website doesn’t work.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder.

As a technology ethics educator and researcher, I have thought a lot about these kinds of “bugs.” What’s accruing here is not just technical debt, but ethical debt. Just as technical debt can result from limited testing during the development process, ethical debt results from not considering possible negative consequences or societal harms. And with ethical debt in particular, the people who incur it are rarely the people who pay for it in the end.

Off to the Races

As soon as OpenAI’s ChatGPT was released in November 2022, the starter pistol for today’s AI race, I imagined the debt ledger starting to fill.

Within months, Google and Microsoft released their own generative AI programs, which seemed rushed to market in an effort to keep up. Google’s stock prices fell when its chatbot Bard confidently supplied a wrong answer during the company’s own demo. One might expect Microsoft to be particularly cautious when it comes to chatbots, considering Tay, its Twitter-based bot that was almost immediately shut down in 2016 after spouting misogynist and white supremacist talking points. Yet early conversations with the AI-powered Bing left some users unsettled, and it has repeated known misinformation.

When the social debt of these rushed releases comes due, I expect that we will hear mention of unintended or unanticipated consequences. After all, even with ethical guidelines in place, it’s not as if OpenAI, Microsoft or Google can see the future. How can someone know what societal problems might emerge before the technology is even fully developed?

The root of this dilemma is uncertainty, which is a common side effect of many technological revolutions, but magnified in the case of artificial intelligence. After all, part of the point of AI is that its actions are not known in advance. AI may not be designed to produce negative consequences, but it is designed to produce the unforeseen.

However, it is disingenuous to suggest that technologists cannot accurately speculate about what many of these consequences might be. By now, there have been countless examples of how AI can reproduce bias and exacerbate social inequities, but these problems are rarely publicly identified by tech companies themselves. It was external researchers who found racial bias in widely used commercial facial analysis systems, for example, and in a medical risk prediction algorithm that was being applied to around 200 million Americans. Academics and advocacy or research organizations like the Algorithmic Justice League and the Distributed AI Research Institute are doing much of this work: identifying harms after the fact. And this pattern doesn’t seem likely to change if companies keep firing ethicists.

Speculating – Responsibly

I sometimes describe myself as a technology optimist who thinks and prepares like a pessimist. The only way to decrease ethical debt is to take the time to think ahead about things that might go wrong – but this is not something that technologists are necessarily taught to do.

Scientist and iconic science fiction writer Isaac Asimov once said that sci-fi authors “foresee the inevitable, and although problems and catastrophes may be inevitable, solutions are not.” Of course, science fiction writers do not tend to be tasked with developing these solutions – but right now, the technologists developing AI are.

So how can AI designers learn to think more like science fiction writers? One of my current research projects focuses on developing ways to support this process of ethical speculation. I don’t mean designing with far-off robot wars in mind; I mean the ability to consider future consequences at all, including in the very near future.

This is a topic I’ve been exploring in my teaching for some time, encouraging students to think through the ethical implications of sci-fi technology in order to prepare them to do the same with technology they might create. One exercise I developed is called the Black Mirror Writers Room, where students speculate about possible negative consequences of technology like social media algorithms and self-driving cars. Often these discussions are based on patterns from the past or the potential for bad actors.

Ph.D. candidate Shamika Klassen and I evaluated this teaching exercise in a research study and found that there are pedagogical benefits to encouraging computing students to imagine what might go wrong in the future – and then brainstorm about how we might avoid that future in the first place.

However, the purpose isn’t to prepare students for those far-flung futures; it is to teach speculation as a skill that can be applied immediately. This skill is especially important for helping students imagine harm to other people, since technological harms often disproportionately impact marginalized groups that are underrepresented in computing professions. The next steps for my research are to translate these ethical speculation strategies for real-world technology design teams.

Time to Hit Pause?

In March 2023, an open letter with thousands of signatures advocated for pausing the training of AI systems more powerful than GPT-4. Unchecked, AI development “might eventually outnumber, outsmart, obsolete and replace us,” or even cause a “loss of control of our civilization,” its writers warned.

As critiques of the letter point out, this focus on hypothetical risks ignores actual harms happening today. Nevertheless, I think there is little disagreement among AI ethicists that AI development needs to slow down – that developers throwing up their hands and citing “unintended consequences” is not going to cut it.

We are only a few months into the “AI race” picking up significant speed, and I think it’s already clear that ethical considerations are being left in the dust. But the debt will come due eventually – and history suggests that Big Tech executives and investors may not be the ones paying for it.

Blackboxstocks (BLBX) – Announces Merger; Reports 4Q22 Results


Tuesday, April 18, 2023

Blackboxstocks, Inc. is a financial technology and social media hybrid platform offering real-time proprietary analytics and news for stock and options traders of all levels. Our web-based software employs “predictive technology” enhanced by artificial intelligence to find volatility and unusual market activity that may result in the rapid change in the price of a stock or option. Blackbox continuously scans the NASDAQ, New York Stock Exchange, CBOE, and all other options markets, analyzing over 10,000 stocks and up to 1,500,000 options contracts multiple times per second. We provide our users with a fully interactive social media platform that is integrated into our dashboard, enabling our users to exchange information and ideas quickly and efficiently through a common network. We recently introduced a live audio/video feature that allows our members to broadcast on their own channels to share trade strategies and market insight within the Blackbox community. Blackbox is a SaaS company with a growing base of users that spans 42 countries; current subscription fees are $99.97 per month or $959.00 annually. For more information, go to: www.blackboxstocks.com.

Joe Gomes, Managing Director – Generalist Analyst, Noble Capital Markets, Inc.

Joshua Zoepfel, Research Associate, Noble Capital Markets, Inc.

Refer to the full report for the price target, fundamental analysis, and rating.

Merger. Blackboxstocks announced its intent to merge with Evtec Group. BLBX shareholders are expected to retain 8.34% of the combined company’s common stock post-merger. While details of the transaction are limited, management believes the transaction will provide significant and long-term value for BLBX shareholders. Blackboxstocks will operate as a subsidiary of Evtec. In its just-filed 10-K for 2022, the Company noted it was exploring strategic alternatives.

Who Is Evtec Group? A private U.K.-based company, Evtec Group is a leading parts supplier for luxury brands in the EV and performance automotive market. The acquisition of Blackboxstocks provides Evtec with a pathway to become publicly traded in the U.S., while enabling Blackboxstocks access to capital needed to take the next step forward in its business, in our view.


Get the Full Report

Equity Research is available at no cost to Registered users of Channelchek. Not a Member? Click ‘Join’ to join the Channelchek Community. There is no cost to register, and we never collect credit card information.

This Company Sponsored Research is provided by Noble Capital Markets, Inc., a FINRA and S.E.C. registered broker-dealer (B/D).

*Analyst certification and important disclosures included in the full report. NOTE: investment decisions should not be based upon the content of this research summary. Proper due diligence is required before making any investment decision. 

Regulate AI? Elon Musk Thinks It’s an Intelligent Idea

Image Credit: Steve Jurvetson (Flickr)

Elon Musk Unveils How He Expects to Approach Artificial Intelligence

The CEO of SpaceX, Twitter, and Tesla, and the founder of The Boring Company and Neuralink, says he wants to do something to serve humanity. Elon Musk has been concerned that artificial intelligence may have the propensity to turn against mankind. He said the best way to avoid the problem is to make artificial intelligence curious. “I’m going to start something which I call ‘TruthGPT’ or a maximum truth-seeking AI that tries to understand the nature of the universe,” Musk said in an interview with Tucker Carlson. The billionaire thinks that an AI that cares about understanding the universe is “unlikely to annihilate humans” as we’re an “interesting part of the universe, hopefully.” During the discussion, he emphasized that this focus is what will set the project apart from competitors such as OpenAI’s ChatGPT and Google’s Bard.

This ambitious new goal of Musk’s was introduced with few details about the project, so it remains unclear how, exactly, a machine becomes curious. He did repeat that he considers AI dangerous if mismanaged, with a “potential for civilizational destruction.” In fact, he called for some level of government oversight over AI projects. Musk isn’t new to the technology; he is one of the co-founders of OpenAI, the company that has been making headlines with its AI chatbot, ChatGPT.

The new technology would likely compete with AI efforts by Sam Altman-led OpenAI, which as mentioned was initially funded by Musk, Google’s DeepMind, and other AI initiatives around the world.

Regulating A.I.

Musk told Carlson he envisions a regulatory agency that “initially seeks insight into AI, then solicits opinion from industry, and then has proposed rule-making,” something like how the Federal Aviation Administration interacts with aviation and aerospace companies. Once agency- and industry-accepted rules are in place, “I think we’ll have a better chance of advanced AI being beneficial to humanity,” Musk said. Musk also signed the open letter calling for a pause on advanced AI research, joining a group of signatories who believe the technology could harm society.

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?” the letter stated.

Part two of the interview is scheduled to air at 8 PM ET April 18 on Fox News.

Paul Hoffman

Managing Editor, Channelchek

https://www.foxnews.com/media/elon-musk-develop-truthgpt-warns-civilizational-destruction-ai

https://www.cnbc.com/2023/04/18/musk-calls-plans-truthgpt-ai-to-rival-openai-deepmind.html

What Americans Really Think of Cryptocurrency

Image Credit: Duncan Rawlinson (Flickr)

Does the News Chatter Surrounding Cryptocurrencies Match the Interest in the Asset Class?

Over the 14 years since bitcoin sprang to life, expectations have ranged from overwhelming enthusiasm over its possibilities to fear of the risks inherent in a payment method that has yet to be widely integrated. A recent 50% run-up in bitcoin has re-energized the believers, but the best-known crypto is still valued at less than half of its high point. Beyond volatility, the issues that cause some to disregard cryptocurrencies as a payment method include regulatory threats, the environmental cost of mining, and failed exchanges. During the week of March 13-19, Pew Research Center conducted a survey measuring usage, confidence, and investment success. The survey is valuable for those paying attention to crypto because it cuts through personal opinion and offers less biased statistics.

Survey Says…

Most Americans (88%) have heard of cryptocurrency. Almost 40% of those aware of crypto told surveyors they are not at all confident in its reliability and safety, with an additional 36% not very confident. Only 4% say they are extremely confident and 2% very confident. Of those who have heard of it, 18% say they are somewhat confident.

Digital technology tends to be less embraced with age, and although concern about crypto is high overall, some age groups are more concerned than others. Among those 50 and older who know about cryptocurrency, 85% say they are not confident in its reliability and safety; among adults 49 and younger, the figure drops to 66%.

Does sex play a role in skepticism toward cryptocurrencies? Among the 88% who have heard of crypto, 80% of women say they are not confident in it, compared with 71% of men.

Does experience lead to acceptance, or acceptance lead to experience? Of those who have invested in one or more digital currencies, 20% say they are extremely or very confident that crypto is safe and reliable; among those with no investing experience, that share drops to 2%. Notably, even among those with crypto experience, 43% responded that they are not very or not at all confident in it.

Cryptocurrency Usage in the U.S.

Younger males are more likely to use cryptocurrency than men 50 and older or women overall. The share of men ages 18 to 29 who have used crypto is more than double that of women in the same age range: 41% versus 16%.

Among upper-income adults, 22% have ever invested in, traded, or used cryptocurrency; among middle-income adults, the figure is slightly lower at 19%, and among lower-income adults, 13%.

Few who have invested in or transacted using cryptocurrency did so for the first time within the past year. When Pew Research asked when respondents first used cryptocurrency, 74% of those who have ever invested in, traded, or used it said one to five years ago. Only 16% said within the past year, and 10% more than five years ago.

College graduates (25%) and those with some college experience (20%) were more likely than those with a high school education or less (10%) to say that their cryptocurrency investments have hurt their personal finances.

Results of Investment

Of those who have invested in crypto, most (45%) say their investments performed worse than expected, while 15% say they did better than expected, 32% about the same as expected, and 7% are unsure.

Measuring the impact this speculation had on users’ personal finances, three-in-five users (60%) say it has neither helped nor hurt. Roughly equal shares say these investments have helped (20%) or hurt (19%) their finances. Just 7% say cryptocurrency has helped their finances a lot, and 3% say it has hurt a lot.

Take Away

There seems to be far more noise reported about cryptocurrencies than actual activity or usage. This could mean a number of things. One reading is that the asset’s potential, once the fear lifts, is high, and that potential includes the large percentage of people now keeping away; on this argument, the ongoing dramatic headlines are warranted, since once the potential is realized there could be much greater movement than we have already seen. Bitcoin, after all, once went from pennies to $68,000. Another explanation for so much news coverage is that the asset class is still novel, so we are all evaluating it as investors; since we’re showing interest or intrigue, news services report on it to gain audience. If we turn our attention elsewhere, that is what we will hear more about.

It is truly a speculative asset class with little history. While some are betting everything on crypto, far more are currently just spectators on the sidelines. The hype and attention it is currently receiving may not match actual investor interest.

Paul Hoffman

Managing Editor, Channelchek

Source

https://www.pewresearch.org/wp-content/uploads/2023/04/sr_2023.4.10_crypto_topline.pdf

Twitter is Now Seated with eToro, Which is a Breakthrough Expansion for Both

Image Credit: Web Summit (Flickr)

Elon Musk Announces New Financial Functionality on Twitter

Starting today, Twitter will give tweeters the ability to buy and sell stocks and crypto on its platform via eToro. Twitter owner Elon Musk has been indicating he intends to turn the popular micro-blogging platform into a “super app.” Today’s move shows substantial headway in allowing financial transactions to be conducted on the social media platform. Other company goals since Musk’s purchase include ride hailing and attracting video influencers who may be disenchanted with YouTube’s restrictions on speech.

What Will the Twitter eToro Partnership Provide?

Founded in 2007, eToro has become one of the largest social investment networks and trading platforms. According to its website, it is “built on social collaboration and investor education: a community where users can connect, share, and learn.”

Under the deal, Twitter will partner with the social investing company to allow its users (known as tweeters, or Twitterers) to trade stocks and cryptocurrencies.

The partnership will give users access to view charts and trade stocks, cryptocurrencies, and other investment assets from eToro via its mobile platform. Twitter users already have access to some real-time market data; this arrangement significantly expands it, adding all the bells and whistles a modern trading app can provide.

Twitter will be expanding its use of cashtags as well. Twitter added pricing data for cashtags (a company’s ticker preceded by “$”) in December 2022. Since January, there have been more than 420 million searches using cashtags, an average of 4.7 million a day.

eToro CEO Yoni Assia told CNBC the deal will help better connect the two brands, adding that in recent years its users have increasingly turned to Twitter to “educate themselves about the markets.”

Assia said there is a great deal of “very high quality” content available in real-time and that the partnership with Twitter will help eToro expand to reach new audiences tapping this as a source of information.

Update on Elon

After Musk’s purchase of Twitter, many advertisers stepped back and watched to see how far the company would go in allowing less-moderated interaction. On Wednesday (April 12), Musk said that “almost all” advertisers had returned to the app. However, Stellantis and Volkswagen, two large competitors of Musk-run Tesla, said they do not yet plan to resume advertising.

Musk told a Morgan Stanley conference last month he wants Twitter to become “the biggest financial institution in the world.” This prompts those who follow Musk to ask, “Why stop there, why not include Mars?”

What Else

Be sure to follow Channelchek on Twitter (@channelchek) to stay up to date on market insights, news, videos, and of course, top-tier investment analyst research on small and microcap opportunities.

Paul Hoffman

Managing Editor, Channelchek

Sources

https://www.etoro.com/en-us/about/

https://www.cnbc.com/2023/04/13/twitter-to-let-users-access-stocks-crypto-via-etoro-in-finance-push.html?__source=iosappshare%7Ccom.apple.UIKit.activity.PostToTwitter

https://www.forbes.com/sites/roberthart/2023/04/13/twitter-will-let-users-buy-stocks-and-crypto-as-elon-musk-pushes-for-everything-app/?sh=332662a26882

https://www.bloomberg.com/news/live-blog/2023-03-07/elon-musk-speaks-at-morgan-stanley-conference

Blackboxstocks (BLBX) – A Reverse Stock Split at 1-for-4


Tuesday, April 11, 2023


Joe Gomes, Managing Director – Generalist Analyst, Noble Capital Markets, Inc.

Joshua Zoepfel, Research Associate, Noble Capital Markets, Inc.

Refer to the full report for the price target, fundamental analysis, and rating.

A Ratio Set. Yesterday, Blackboxstocks announced that the Company has filed an amendment to its articles of incorporation with the Nevada Secretary of State to set a reverse stock split ratio of one-for-four. The amendment took effect on April 10, 2023 at 4:01 p.m. Eastern Daylight Time, and trading on a split-adjusted basis begins on April 11, 2023. The exchange agent for the split will be Securities Transfer Corporation.

The Process. The amendment process began last month, when the Company’s Board of Directors adopted resolutions advising and recommending that stockholders approve a reverse stock split of one-for-seven. The stockholders voted to approve the split and amendment the same month, and the Board subsequently set the split ratio at one-for-four on April 7, 2023.
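For readers unfamiliar with the mechanics, the arithmetic of a one-for-four reverse split can be sketched as follows. The share count and price below are hypothetical, not BLBX's actual figures, and the treatment of fractional shares varies by company (typically cash-out or rounding):

```python
# Illustrative arithmetic for a reverse stock split: every `ratio` old
# shares become one new share, and the price scales up by the same ratio,
# so the market value of a position is unchanged.
def reverse_split(shares: int, price: float, ratio: int = 4):
    """Return (new whole shares, leftover old shares, new price)."""
    new_shares, fractional_old_shares = divmod(shares, ratio)
    new_price = price * ratio
    return new_shares, fractional_old_shares, new_price

# Hypothetical holding: 1,000 shares at $0.50 before a 1-for-4 reverse split.
new_shares, leftover, new_price = reverse_split(1_000, 0.50, ratio=4)
print(new_shares, new_price)  # 250 shares at $2.00
assert new_shares * new_price == 1_000 * 0.50  # market value is unchanged
```

Companies often use reverse splits to lift a low share price, for example to stay above an exchange's minimum-price listing requirement; the split itself creates no value.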



Unexpected Adjustments Among Today’s Self-Directed Investors

Image Credit: Focal Foto (Flickr)

How Decision-Making and Market Impact is Shifting for Retail Investors

Retail investors’ preferences change over time. This impacts sector strength and the overall direction of markets. Even the methods of interacting with exchanges change as newer products like trading apps, artificial intelligence, and exchange-traded products (ETP) become available.

The influence of retail is growing, and, anecdotally, preferences are shifting more quickly. Within this category are self-directed investors with different knowledge bases and at different stages of their lives. As people move through those stages, their concerns, outlooks, and risk tolerances adjust. Nasdaq just published its second annual survey of retail investors to measure how their interests are changing and what impact that may have. The survey of 2,000 investors, from Gen Z to Baby Boomers, uncovered some surprising trends in decision-making, fears, comfort zones, and asset class preferences.

Generational Groupings

The Nasdaq survey exposed a number of commonalities between the generations. All listed their greatest concerns as inflation and recession, but while the youngest (Gen Z, born 1997–2012) found housing and real estate a deep concern, the oldest group (Baby Boomers, born 1946–1964) is more concerned about tax rate changes. The generations in the middle, Gen X (born 1965–1980) and Millennials (born 1981–1996), show greater concern over interest rate changes.

The survey also sought to understand how much time investors in each generation spent researching buy and sell decisions. Of Gen Z, on average 48% spent less than an hour, while 3% of these younger adults evaluated the transaction for at least a month. The next age category, Millennials, spent a bit more time on diligence: only 28% would buy or sell with less than an hour of thought, and 4% took a month or longer to decide. The trend toward more time spent researching continued with Gen X, which showed a greater propensity to evaluate before a purchase. Only 15% would press the buy or sell button with less than an hour spent understanding the investment, and 7% of Gen X investors say they take a month or longer.

A big difference between the youngest and the oldest is that among Gen Z investors, although almost half said they spend fewer than 60 minutes researching, 0% said they did not research at all. Of the Baby Boomers surveyed, 24% indicated they spend no time researching before they buy or sell. It’s unclear whether this is because the older group is less tech-savvy, hires a professional to do the research, or believes it has the knowledge to move without digging deeper.

Overlap in Generational Preferences

Data Sources: Nasdaq

Other Trends

Despite listing recession and inflation as their top concerns, 71% of Gen Z and 50% of Millennials say they are investing more aggressively. This is in stark contrast to the 9% of Boomers and 20% of Gen X describing their strategies as more aggressive than the previous year.

The influence of Twitter, Facebook, and even TikTok keeps expanding: 73% of Gen Z use TikTok as a source of investment information, an 18% increase from the prior year. Baby Boomer TikTok investment use rose by 16% to its current 25%.

The investment themes from year to year show ESG and crypto interest sinking, while robotics and other autonomous technology is where the focus has increased most. Younger investors are more active in their investments than before and more frequently conduct their own research ahead of transacting. Investors of all ages are more likely than before to consider alternatives, including options, cryptocurrencies, and exchange-traded products.

Competition among brokerage platforms is as fierce as it is in any innovative, tech-heavy industry. The availability of advanced technology and commission-free trading has made investing more accessible, especially for younger investors.

Take Away

The second annual survey conducted by Nasdaq indicates that the retail investor growth and power we’ve experienced in recent years was not a fad; it is growing and becoming more sophisticated. Retail investors are more influential than ever and should be understood, as they are here to stay. This is expected to continue to disrupt and influence markets dramatically.

As retail trends take a higher position of importance in defining the day-to-day challenges of investing and mapping the markets’ future, these self-directed investors are finding more services to accommodate them. One source is the Channelchek platform, where retail and institutional investors of all ages can review research reports, absorb video discussions with management of interesting opportunities, expand their understanding through daily articles, and, if relevant, attend a roadshow to meet a particular company’s management.

Sign up for Channelchek emails and full access here.

Paul Hoffman

Managing Editor, Channelchek

Sources

https://www.nasdaq.com/articles/retail-revival%3A-how-a-year-of-market-volatility-reshaped-investor-strategies

https://nd.nasdaq.com/GENZ

https://nd.nasdaq.com/Millennials

https://nd.nasdaq.com/GENX

https://nd.nasdaq.com/BabyBoomers

About the Bitcoin to $1 Million by Summer 2023 Wager

Image Credit: Fortune Brainstorm TECH (Flickr)

Are Balaji Srinivasan and Cathie Wood Right About the Future Value of Bitcoin?

The former Chief Technology Officer (CTO) of Coinbase is either extremely bullish on Bitcoin or has other reasons for a tweet that set off a huge price jump in the cryptocurrency. Balaji Srinivasan is a very influential investor, especially in the tech space. Last Friday he reaffirmed a bet he made in March that within 90 days bitcoin would reach $1 million per token. At stake in the bet is $2 million. Crypto investors trying to understand the strong conviction behind the wager may first need to understand the person behind the tweet.

Who is Balaji Srinivasan?

The Indian-born, U.S.-raised tech entrepreneur, investor, and academic has a Ph.D. in Electrical Engineering and an MS in Chemical Engineering from Stanford University. Srinivasan co-founded a number of startups, including Earn.com, a blockchain payments platform, and the genomics company Counsyl. He has worked as a General Partner at a prominent Silicon Valley venture capital firm and as the Chief Technology Officer at the crypto exchange Coinbase.

Srinivasan has a large following as a commentator on the subject of technology and its social and political implications. Popular topics of his numerous articles and talks include the future of technology, the rise of decentralized systems, and the potential impact of emerging technologies on society. The tech guru has lectured at Stanford University and has served as an advisor to the FDA and the World Economic Forum.

Twitter: @balajis

What is Behind this Forecast?

In an ARK Invest podcast last Friday (April 7), Srinivasan explained that bitcoin has good momentum and that he still believes it will reach $1 million within a three-month time horizon. He cited concerns over the regional banking crisis, which he believes will destabilize the dollar and cause the Fed to dump more dollars into the system. Fear and inflation in the coming months are the drivers. Cathie Wood agreed with the direction and the potential for bitcoin to hit $1 million, but her reasons were a bit different. She believes fear will be one driver but reiterated her call for deflation. “We are very positive about Bitcoin as well. But your forecast was in the context of hyperinflation associated with fiat currencies. Our optimism is more of a function of fears of deflation and counter-party risk. Both of those should accrue to Bitcoin’s benefit,” Wood explained on her company’s podcast.

On the surface, the bet that bitcoin will hit $1 million by summer seems highly improbable. It would take immense capital flows into the cryptocurrency, and there is doubt the exchanges could handle the migration of assets. Also, the question of what would prompt a run from traditional currency large enough to send bitcoin skyrocketing has still not been satisfactorily answered.

The one-hour-and-17-minute podcast, available at the link below under “Sources,” is nonetheless thought-provoking. These are two well-regarded tech analysts standing behind something that sounds outlandish.

Another possible explanation for his outward conviction is that this isn’t a risky bet for Balaji. He’s presumed to own a considerable amount of bitcoin, and the tick up on news of his bet (bitcoin is up nearly 25% since his tweet) could more than offset a $2 million loss on the wager. The timing of the increase in BTC makes it appear that any loss could be self-funded by the attention the bet has drawn to the cryptocurrency.

Take Away

Bitcoin is higher than it had been when tech guru Balaji Srinivasan placed his public wager. However, at $28,500 it would still have to rise by $971,500 over the next few months. Supporting the idea that bitcoin is going up substantially are two tech and disruption gurus whose thoughts are worth considering alongside your own observations.
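The scale of the required move is easy to check with the figures above:

```python
# Figures from the article: bitcoin near $28,500, wager target $1,000,000.
current, target = 28_500, 1_000_000

gap = target - current        # dollars bitcoin would still need to climb
multiple = target / current   # required price multiple in under three months

print(gap)                 # 971500
print(round(multiple, 1))  # 35.1
```

In other words, the wager requires roughly a 35-fold gain in under three months.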

Paul Hoffman

Managing Editor, Channelchek

Sources

https://ark-invest.com/podcasts/

https://www.coindesk.com/consensus-magazine/2023/04/01/balaji-srinivasans-1m-bitcoin-bet-could-be-right-but-i-hope-hes-wrong/

The FDA’s Action Plan Regarding Artificial Intelligence and Machine Learning

Image Credit:  Interscatter Data Sharing Contact Lens, UW News (Flickr)

The Challenges Surrounding AI/ML are Taken Head on by the FDA

Should artificial intelligence or machine learning (AI/ML) be allowed to alter FDA-approved software in medical devices? If so, where should the guardrails be set? The discussions and debates surrounding AI/ML are heated; some believe the technology may destroy humanity, while others look forward to the speed of advancement it will allow. The FDA is getting out ahead of this debate. This week the agency drafted a list of “guiding principles” intended to begin developing best practices for machine learning within medical devices.

Background

The FDA views its role as protecting patients while at the same time avoiding standing in the way of progress. In the case of ML, that means not preventing modifications to medical treatments or procedures that would improve outcomes. AI/ML has the potential to evaluate data sets more quickly, improve diagnoses, adjust how devices are used, and alter processes overall based on what is learned.

On April 3, the FDA drafted AI-Enabled Medical Device Life Cycle Plan Guidance, with a comment period ending July 3, 2023. The U.S. regulator’s proposal attempts to establish science-based requirements for medical devices powered by artificial intelligence and machine learning. The overall goal is to avoid slowing the implementation of improved new devices that may quickly be modified, updated, and rapidly deliver an improved response to new data.

Greg Aurand, Senior Healthcare Services & Medical Devices Analyst at Noble Capital Markets, summed up the purpose of the FDA’s actions this way: “The FDA needs to move cautiously, but they don’t wish to slow down healthcare improvements on an ongoing basis.” Aurand gave examples where machine learning has the potential to make better assessments, better decipher data sets such as antibiotic resistance, and improve results while perhaps taming medical expenses. He said, “new draft guidelines from the FDA should make it easier for approval of modifications to occur so previously unrecognized improvements may occur within the guidelines, and the process is less static.”

How is Artificial Intelligence Likely to Revise Medical Devices?

As is written into the FDA guidance, “Artificial intelligence (AI) and machine learning (ML) technologies have the potential to transform health care by deriving new and important insights from the vast amount of data generated during the delivery of health care every day. Medical device manufacturers are using these technologies to innovate their products to better assist health care providers and improve patient care.”  

The FDA accepts that a great benefit of AI/ML in software is its ability to learn from real-world use and experience, and then to improve its own performance.

How is the FDA Expected to Regulate AI/ML Devices?  

Traditionally, the FDA reviews medical devices and improvements through a premarket pathway for approval. The FDA may also review and clear modifications to medical devices, including software as a medical device, depending on the significance or risk posed to patients by that modification. The industry is going through a paradigm shift which the FDA is helping to enable.

The FDA’s current paradigm of medical device regulation was not designed for adaptive artificial intelligence. Under the FDA’s current approach to software modifications, many of these artificial intelligence and machine learning-driven software changes to a device would need a premarket review. The new regulation is expected to create broader parameters of pre-approval that allow adjustments within set boundaries.

A new framework envisioned by the FDA includes a “predetermined change control plan” in premarket submissions. This plan would include the types of anticipated modifications, referred to as “Software as a Medical Device Pre-Specifications.” The associated methodology used to implement those changes in a measured and controlled way that manages risk is what the FDA calls the “Algorithm Change Protocol.”
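One way to picture how the two pieces fit together is as a simple data structure. The classes and field names below are purely illustrative; the FDA guidance defines concepts, not a schema:

```python
from dataclasses import dataclass


@dataclass
class PreSpecifications:
    """SaMD Pre-Specifications: the modification types anticipated up front."""
    anticipated_changes: list  # e.g. ["retrain on new site data"]


@dataclass
class AlgorithmChangeProtocol:
    """How each anticipated change is implemented and risk-managed."""
    validation_steps: list  # e.g. ["hold-out test set", "clinical review"]
    rollback_plan: str


@dataclass
class ChangeControlPlan:
    pre_specs: PreSpecifications
    protocol: AlgorithmChangeProtocol

    def permits(self, proposed_change: str) -> bool:
        # A change outside the pre-specified envelope would fall back
        # to a traditional premarket review.
        return proposed_change in self.pre_specs.anticipated_changes


plan = ChangeControlPlan(
    PreSpecifications(["retrain on new site data"]),
    AlgorithmChangeProtocol(["hold-out test set"], "revert to prior model"),
)
print(plan.permits("retrain on new site data"))  # True
print(plan.permits("change intended use"))       # False
```

The point of the sketch is the `permits` check: modifications anticipated in the plan proceed under the protocol, while anything else falls outside the pre-approved envelope.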

Take Away

Artificial intelligence will transform many industries, and while some want to hit the pause button on progress, the FDA is trying to define how much control can be left to machine learning. The Guidance released in April with a three-month comment period is expected to allow medical equipment and software designers to progress into the unknown, with all stakeholders having as their goal better outcomes for patients.

If you wish to comment, the FDA requests submissions by July 3, 2023, to ensure the agency considers your comment on the draft guidance before it begins work on the final version.

Paul Hoffman

Managing Editor, Channelchek

Sources

https://www.fda.gov/regulatory-information/search-fda-guidance-documents/marketing-submission-recommendations-predetermined-change-control-plan-artificial

https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device

https://www.fda.gov/media/145022/download

https://www.fda.gov/media/166704/download

Deep Fakes and the Risk of Abuse

Image Credit: Steve Juvetson (Flickr)

Watermarking ChatGPT and Other Generative AIs Could Help Protect Against Fraud and Misinformation

Shortly after rumors leaked of former President Donald Trump’s impending indictment, images purporting to show his arrest appeared online. These images looked like news photos, but they were fake. They were created by a generative artificial intelligence system.

Generative AI, in the form of image generators like DALL-E, Midjourney and Stable Diffusion, and text generators like Bard, ChatGPT, Chinchilla and LLaMA, has exploded in the public sphere. By combining clever machine-learning algorithms with billions of pieces of human-generated content, these systems can do anything from creating an eerily realistic image from a caption and synthesizing a speech in President Joe Biden’s voice to replacing one person’s likeness with another in a video or writing a coherent 800-word op-ed from a title prompt.

This article was republished with permission from The Conversation, a news site dedicated to sharing ideas from academic experts. It represents the research-based findings and thoughts of Hany Farid, Professor of Computer Science, University of California, Berkeley.

Even in these early days, generative AI is capable of creating highly realistic content. My colleague Sophie Nightingale and I found that the average person is unable to reliably distinguish an image of a real person from an AI-generated person. Although audio and video have not yet fully passed through the uncanny valley – images or models of people that are unsettling because they are close to but not quite realistic – they are likely to soon. When this happens, and it is all but guaranteed to, it will become increasingly easier to distort reality.

In this new world, it will be a snap to generate a video of a CEO saying her company’s profits are down 20%, which could lead to billions in market-share loss, or to generate a video of a world leader threatening military action, which could trigger a geopolitical crisis, or to insert the likeness of anyone into a sexually explicit video.

Advances in generative AI will soon mean that fake but visually convincing content will proliferate online, leading to an even messier information ecosystem. A secondary consequence is that detractors will be able to easily dismiss as fake actual video evidence of everything from police violence and human rights violations to a world leader burning top-secret documents.

As society stares down the barrel of what is almost certainly just the beginning of these advances in generative AI, there are reasonable and technologically feasible interventions that can be used to help mitigate these abuses. As a computer scientist who specializes in image forensics, I believe that a key method is watermarking.

Watermarks

There is a long history of marking documents and other items to prove their authenticity, indicate ownership and counter counterfeiting. Today, Getty Images, a massive image archive, adds a visible watermark to all digital images in its catalog. This allows customers to freely browse images while protecting Getty’s assets.

Imperceptible digital watermarks are also used for digital rights management. A watermark can be added to a digital image by, for example, tweaking every 10th image pixel so that its color (typically a number in the range 0 to 255) is even-valued. Because this pixel tweaking is so minor, the watermark is imperceptible. And, because this periodic pattern is unlikely to occur naturally, and can easily be verified, it can be used to verify an image’s provenance.
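The parity scheme just described can be sketched in a few lines. This is a minimal illustration, not any production watermarking system, assuming an 8-bit grayscale image held in a NumPy array:

```python
import numpy as np


def embed_watermark(img: np.ndarray, step: int = 10) -> np.ndarray:
    """Force every `step`-th pixel value (in flattened order) to be even.
    A one-level change in a 0-255 channel is imperceptible."""
    out = img.copy()
    flat = out.reshape(-1)
    flat[::step] &= 0xFE  # clear the lowest bit, making the value even
    return out


def watermark_score(img: np.ndarray, step: int = 10) -> float:
    """Fraction of sampled pixels that are even: roughly 0.5 for a
    natural image, exactly 1.0 for a watermarked one."""
    sampled = img.reshape(-1)[::step]
    return float(np.mean(sampled % 2 == 0))


rng = np.random.default_rng(0)
natural = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(natural)
print(round(watermark_score(natural), 2))  # near 0.5
print(watermark_score(marked))             # 1.0
```

Because the periodic all-even pattern is vanishingly unlikely to occur by chance, a score at 1.0 verifies the mark; as the article notes below, though, a watermark this simple is destroyed by almost any edit to the pixel values.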

Even medium-resolution images contain millions of pixels, which means that additional information can be embedded into the watermark, including a unique identifier that encodes the generating software and a unique user ID. This same type of imperceptible watermark can be applied to audio and video.

The ideal watermark is one that is imperceptible and also resilient to simple manipulations like cropping, resizing, color adjustment and converting digital formats. Although the pixel color watermark example is not resilient because the color values can be changed, many watermarking strategies have been proposed that are robust – though not impervious – to attempts to remove them.

Watermarking and AI

These watermarks can be baked into the generative AI systems by watermarking all the training data, after which the generated content will contain the same watermark. This baked-in watermark is attractive because it means that generative AI tools can be open-sourced – as the image generator Stable Diffusion is – without concerns that a watermarking process could be removed from the image generator’s software. Stable Diffusion has a watermarking function, but because it’s open source, anyone can simply remove that part of the code.

OpenAI is experimenting with a system to watermark ChatGPT’s creations. Characters in a paragraph cannot, of course, be tweaked like a pixel value, so text watermarking takes on a different form.

Text-based generative AI is based on producing the next most-reasonable word in a sentence. For example, starting with the sentence fragment “an AI system can…,” ChatGPT will predict that the next word should be “learn,” “predict” or “understand.” Associated with each of these words is a probability corresponding to the likelihood of each word appearing next in the sentence. ChatGPT learned these probabilities from the large body of text it was trained on.

Generated text can be watermarked by secretly tagging a subset of words and then biasing the selection of a word to be a synonymous tagged word. For example, the tagged word “comprehend” can be used instead of “understand.” By periodically biasing word selection in this way, a body of text is watermarked based on a particular distribution of tagged words. This approach won’t work for short tweets but is generally effective with text of 800 or more words depending on the specific watermark details.
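The tagging-and-biasing idea can be sketched roughly as follows. This is a toy illustration, not OpenAI’s actual method: the secret key, the hash-based tag rule, and the three-entry synonym table are all invented for the example.

```python
import hashlib

# Hypothetical secret shared between generator and detector.
SECRET_KEY = b"demo-key"


def is_tagged(word: str) -> bool:
    # Secretly tag roughly half the vocabulary using a keyed hash.
    digest = hashlib.sha256(SECRET_KEY + word.lower().encode()).digest()
    return digest[0] % 2 == 0


# Toy synonym table standing in for the model's candidate next words.
SYNONYMS = {"understand": "comprehend", "learn": "absorb", "predict": "forecast"}


def watermark_text(words):
    # Bias word selection: swap in a synonym whenever doing so adds a tag.
    out = []
    for w in words:
        alt = SYNONYMS.get(w)
        out.append(alt if (alt and is_tagged(alt) and not is_tagged(w)) else w)
    return out


def tagged_fraction(words):
    # Detection: watermarked text skews above the ~50% chance baseline.
    return sum(is_tagged(w) for w in words) / len(words)


original = "an ai system can understand and learn to predict".split()
marked = watermark_text(original)
```

Detection then reduces to a statistical test: with a body of text of 800 or more words, a tagged fraction well above the chance level is strong evidence of the watermark, which is why the approach fails on short tweets.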

Generative AI systems can, and I believe should, watermark all their content, allowing for easier downstream identification and, if necessary, intervention. If the industry won’t do this voluntarily, lawmakers could pass regulation to enforce this rule. Unscrupulous people will, of course, not comply with these standards. But, if the major online gatekeepers – Apple and Google app stores, Amazon, Google, Microsoft cloud services and GitHub – enforce these rules by banning noncompliant software, the harm will be significantly reduced.

Signing Authentic Content

Tackling the problem from the other end, a similar approach could be adopted to authenticate original audiovisual recordings at the point of capture. A specialized camera app could cryptographically sign the recorded content as it’s recorded. There is no way to tamper with this signature without leaving evidence of the attempt. The signature is then stored on a centralized list of trusted signatures.
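At the level of a sketch, point-of-capture signing amounts to binding a signature to the exact recorded bytes. In the snippet below, a hypothetical per-device secret and an HMAC stand in for the public-key signature a real system would use, purely to stay within the Python standard library:

```python
import hashlib
import hmac

# Hypothetical secret provisioned into the capture device.
DEVICE_SECRET = b"per-device-secret"


def sign_capture(media_bytes: bytes) -> str:
    # Bind a signature to the exact recorded bytes at capture time.
    return hmac.new(DEVICE_SECRET, media_bytes, hashlib.sha256).hexdigest()


def verify_capture(media_bytes: bytes, signature: str) -> bool:
    # Any edit to the bytes invalidates the signature.
    return hmac.compare_digest(sign_capture(media_bytes), signature)


frame = b"raw sensor data for one video frame"
sig = sign_capture(frame)
print(verify_capture(frame, sig))            # True
print(verify_capture(frame + b"edit", sig))  # False
```

A public-key design improves on this sketch in exactly the way the paragraph above requires: verifiers check against a published list of trusted keys without ever holding the signing secret.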

Although not applicable to text, audiovisual content can then be verified as human-generated. The Coalition for Content Provenance and Authenticity (C2PA), a collaborative effort to create a standard for authenticating media, recently released an open specification to support this approach. With major institutions including Adobe, Microsoft, Intel, BBC and many others joining this effort, the C2PA is well positioned to produce effective and widely deployed authentication technology.

The combined signing and watermarking of human-generated and AI-generated content will not prevent all forms of abuse, but it will provide some measure of protection. Any safeguards will have to be continually adapted and refined as adversaries find novel ways to weaponize the latest technologies.

In the same way that society has been fighting a decades-long battle against other cyber threats like spam, malware and phishing, we should prepare ourselves for an equally protracted battle to defend against various forms of abuse perpetrated using generative AI.

One Stop Systems (OSS) – A Transitional 2023 Sets Up 2024 For Growth


Friday, March 24, 2023

One Stop Systems, Inc. (OSS) designs and manufactures innovative AI Transportable edge computing modules and systems, including ruggedized servers, compute accelerators, expansion systems, flash storage arrays, and Ion Accelerator™ SAN, NAS, and data recording software for AI workflows. These products are used for AI data set capture, training, and large-scale inference in the defense, oil and gas, mining, autonomous vehicles, and rugged entertainment applications. OSS utilizes the power of PCI Express, the latest GPU accelerators and NVMe storage to build award-winning systems, including many industry firsts, for industrial OEMs and government customers. The company enables AI on the Fly® by bringing AI datacenter performance to ‘the edge,’ especially on mobile platforms, and by addressing the entire AI workflow, from high-speed data acquisition to deep learning, training, and inference. OSS products are available directly or through global distributors. For more information, go to www.onestopsystems.com.

Joe Gomes, Managing Director – Generalist Analyst, Noble Capital Markets, Inc.

Joshua Zoepfel, Research Associate, Noble Capital Markets, Inc.

Refer to the full report for the price target, fundamental analysis, and rating.

4Q22 Results. Revenue of $18.2 million was up 2.7% y-o-y but about $1 million below expectations, as the Disguise business was weaker than expected. We had forecast $19 million. Driven by one-time items, OSS reported a GAAP net loss of $3.3 million, or $0.16 per share, in the quarter, compared to a loss of $386,243, or $0.02 per share, last year. We had forecast net income of $0.4 million, or $0.02 per share.

Military Opportunities Expanding. OSS is now engaged with eight of the 10 largest military prime contractors in the U.S., with multiple prime contractor bids to the DOD using OSS products. OSS has won two new military programs already in 2023, with eight more in the pipeline.


Get the Full Report

Equity Research is available at no cost to Registered users of Channelchek. Not a Member? Click ‘Join’ to join the Channelchek Community. There is no cost to register, and we never collect credit card information.

This Company Sponsored Research is provided by Noble Capital Markets, Inc., a FINRA and S.E.C. registered broker-dealer (B/D).

*Analyst certification and important disclosures included in the full report. NOTE: investment decisions should not be based upon the content of this research summary. Proper due diligence is required before making any investment decision. 

Block Inc. Versus Hindenburg Research, Who’s Correct?

Image Credit: Hindenburg Research (YouTube)

The Details of the Hindenburg Research Report Include Serious Allegations

A legal face-off may be brewing as Block (SQ), the other company co-founded by Jack Dorsey, calls on the SEC over what it terms an “inaccurate report.” The report Block (formerly Square) is referring to was released by Hindenburg Research on March 23. The research contends that Dorsey’s fintech company showed a “willingness to facilitate fraud against consumers and the government, avoid regulation, dress up predatory loans and fees as revolutionary technology, and mislead investors with inflated metrics.”

What is each side claiming, and what responsibility comes with releasing a report that may take Hindenburg into a fight with a company with a $44 billion market cap?

Who’s Involved?

Block is a financial technology company specializing in mobile payments founded in 2009 by Jack Dorsey and Jim McKelvey. The company’s flagship product is a small, square-shaped credit card reader that plugs into a smartphone or tablet and allows businesses to accept credit and debit card payments. Block has added other financial products and services, including point-of-sale software, payroll processing, and business loans.

Hindenburg Research provides investors with investigative research and analysis to help them identify potential risks or fraudulent practices at publicly traded companies. It is described as a short-selling, research-based firm, and its research is often considered within the context of its short-position investment strategy.

Image: Block’s flagship product – Nat’l Museum of American History Smithsonian Institution (Flickr)

What is Hindenburg’s Claim?

The research firm, which has a reputation for looking below the surface for trouble at companies, says Block is not what it claims to be. According to the Hindenburg report, the Dorsey-founded firm claims to have developed frictionless and magical financial technology whose mission, the report quotes Block as saying, is to empower the “unbanked” and the “underbanked.”

Hindenburg says that over two years of investigation, involving dozens of interviews with former employees, it found that Block has systematically taken advantage of the demographics it claims to be helping. This refers to the stated mission of helping the underbanked. Instead, the research firm says, this stands in conflict with “the company’s willingness to facilitate fraud against consumers and the government, avoid regulation, and dress up predatory loans and fees as revolutionary technology, and mislead investors with inflated metrics.”

The two years of investigation also indicated that Block severely overstated its user counts and understated its customer acquisition costs. This information, the report says, is based on former employees’ estimates that 40%-75% of accounts they reviewed were fake, involved in fraud, or were additional accounts tied to a single individual.

Hindenburg claims a key metric that investors use to value the company is unclear: how many individuals are on Cash App. The report accuses the company of reporting misleading “transacting active” metrics filled with fake and duplicate accounts. Hindenburg says, “Block can and should clarify to investors an estimate on how many unique people actually use Cash App.”

Hindenburg said the app is used for illegal activity and points to the many rap songs written about engaging in illegal activity made possible with the help of the app. The research firm even made a compilation video to demonstrate the point (link to video under “Sources” below).

A line in one of the songs is, “I paid them hitters through Cash App.” Hindenburg contends that Block paid to promote the video for the song, called “Cash App,” which described paying contract killers through the app. The song’s artist was later arrested for attempted murder.

The Hindenburg report also says a leading non-profit organization cited Block’s Cash App “by far” as the top app used in reported U.S. sex trafficking. Multiple Department of Justice complaints outline how Cash App has been used to facilitate sex trafficking, including sex trafficking of minors.

Beyond alleged facilitation of payment for crimes, former employees contend the platform is overrun with scam accounts and fake users. As examples of obvious distortions of user numbers, “Jack Dorsey” has multiple fake accounts, including some that appear aimed at scamming Cash App users, and “Elon Musk” and “Donald Trump” have dozens of accounts in their names. Hindenburg contends it tested the flaw itself: “we ordered a Cash Card under our obviously fake Donald Trump account, checking to see if Cash App’s compliance would take issue—the card promptly arrived in the mail.”

Block’s Response

Not to be dissed, management at Block fired back with a press release of its own: “We intend to work with the SEC and explore legal action against Hindenburg Research for the factually inaccurate and misleading report they shared about our Cash App business today.”

The Dorsey-founded firm suggested that the research firm wrote the report for dubious reasons and that it may be part of an orchestrated reverse pump and dump: “Hindenburg is known for these types of attacks, which are designed solely to allow short sellers to profit from a declined stock price. We have reviewed the full report in the context of our own data and believe it’s designed to deceive and confuse investors.”

The company then reassured stakeholders, saying, “we are a highly regulated public company with regular disclosures, and are confident in our products, reporting, compliance programs, and controls. We will not be distracted by typical short seller tactics.”

There’s Smoke, is There Fire?

Are the disparaging claims against Block’s business accurate? Is there merit to what Block says of Hindenburg Research? As Block may be seeking a legal remedy, it is unlikely that either party will be very vocal from here.

For investors, it’s logical that both parties cannot be right at the same time; one of them is overstating the truth. If Block is indeed working with the SEC, the truth should eventually surface.

Paul Hoffman

Managing Editor, Channelchek

Sources

https://youtu.be/StjWk3Mj-M4?t=8

https://hindenburgresearch.com/

https://investors.block.xyz/news/news-details/2023/Blocks-Response-to-Inaccurate-Short-Seller-Report/default.aspx