How does Super Micro (SMCI) help us to pick AI stock winners?
Welcome to AI Collision 💥,
In today’s collision between AI and our world:
Up, down, nobody knows…
Groq vs Grok
Which AI have you used?
If that’s enough to get the Groqs (or Groks) fighting, read on…
AI Collision 💥: how do you tell if an AI stock is worth picking or not?
My morning routine usually starts between 5.30am and 6.00am.
And no, this isn’t one of those “life hack” posts where I tell you I’ve done six workouts and eaten 2kg of steak before 7am.
My day starts that early because at least one of my kids is standing beside my bed, like a silent serial killer, waiting for one eye to open so they can go and play.
With no other choice, I’m up, bleary-eyed, them smashing some milk and a banana while I check in to see what’s been going down in the markets overnight.
Hence, my usual first port of call is a data aggregation site like Yahoo Finance, where I can catch up on closing prices and post-market trading in the US and see what’s been moving in Australia and Asia, all while mainlining coffee straight into my system.
As I did that yesterday morning, I checked in on a few “darling” stocks of the market.
One of those being Super Micro Computer Inc. (NASDAQ:SMCI).
When I checked in, it wasn’t the price action that caught my attention right away; it was some of the news feeds about SMCI that took my half-awake state into the realms of dreamland.
Here’s the first thing I saw…
$1,300 per share would be 71% higher.
Then I saw this…
60% higher would be $1,212.
Then this…
And this came off the back of a fall from an intraday high of $1,077 on Friday 16 February to an intraday low of $692 on Tuesday 20 February. That’s a swing of roughly 36% lower.
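If you want to check that figure yourself, the swing is just the percentage drop from the intraday high to the intraday low. A quick back-of-the-envelope sketch in Python, using the prices quoted above:

```python
# Percentage drop from SMCI's intraday high ($1,077 on 16 Feb)
# to its intraday low ($692 on 20 Feb).
high = 1077.0
low = 692.0
swing = (high - low) / high * 100
print(f"{swing:.1f}% swing lower")  # → 35.7% swing lower
```

Measured against the high, the fall works out at just under 36%.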
So, what is it? What does SMCI do next?
Well, the short answer is, short term, nobody bloody knows!
Will it retest over $1,000 in the next week, or will it sail 50% lower towards prices it was trading at only a month ago?
Every trader, broker and analyst will have a take on it. So how do you know who to listen to, who to trust…?
That includes me by the way!
I think the approach to take here is one that has proven most effective for me over time. That is to form a distinctive world view about what our world looks like in a decade, or two decades, and work backwards from there.
If you think that technology like AI will only get better, that we’ll use more computationally intensive hardware, and that we’ll be more connected, faster, with greater access to AI, then you just need to ask: what are the smart plays that build that world?
And then ask: is SMCI the kind of company that fits into that thesis?
But if you think AI is all hyped up, that an unforeseen competitor will come along and scoop up the market, or that the technology fundamentally isn’t going to have a big impact on our world, then you’ll find your answers pretty quickly too.
I believe this is how you at least start figuring out which AI stocks are worth tipping capital into. Once you’ve built a thesis, or found expert information and guidance to shape your view (like the exceptionally great content here at AI Collision 💥), you look at those companies and see if they actually make money from the tech they’ve got going.
This is a tried and tested blueprint for picking AI stocks. Some call it qualitative, top-down investing. For me, it’s just a reverse jigsaw: when you know what the finished puzzle looks like, you start to figure out which pieces make it so.
It’s not always right, but you don’t need to be right all the time. And when you see the pace at which AI companies like SMCI are exploding, I think it’s a puzzle worth figuring out.
AI gone wild 🤪
First, we had central processing units (CPUs).
Then we had graphics processing units (GPUs).
Now weāve got language processing units (LPUs).
Well, we don’t “have” LPUs per se, but we might. And that’s enough to have sent the AI world into a bit of a tizz these last few days.
Have you ever heard of Groq?
No, that’s not a spelling mistake. We’re not talking about Grok, Elon Musk’s AI that he’s rolling out through X.com, the Grok that we wrote about in November last year.
I mean Groq, the same Groq that apparently existed well before Elon Musk’s ever did.
Here’s what Groq said when Elon made a song and dance about Grok:
Now that we’ve got the whole “whose Groq (or Grok) came first” question out of the way (clearly it was Groq), one question remains: why is this important today?
Well, it comes back to the CPU, GPU and now LPU evolution.
Groq is claiming that it’s developed an “AI chip” that’s dramatically faster than anything in the market right now. More specifically, faster than the AI chips that Nvidia is making and that companies like Microsoft are using.
As Groq explains:
An LPU Inference Engine, with LPU standing for Language Processing Unit™, is a new type of end-to-end processing unit system that provides the fastest inference for computationally intensive applications with a sequential component to them, such as AI language applications (LLMs).
In practice, there are claims that computation with LPUs runs as much as 13 times faster.
In a recent Gizmodo article the speed comparison was put to the test, finding:
Groq produces 247 tokens/second compared to Microsoft’s 18 tokens/second, according to a third-party test from Artificial Analysis published last week. That means ChatGPT could run more than 13x as fast if it was running on Groq’s chips.
I also decided to do my own real-world test between Groq and OpenAI’s ChatGPT.
With ChatGPT (using GPT-4) I asked the question, “Who invented the TPU?”
26 seconds later, I got the following answer.
The Tensor Processing Unit (TPU) was invented by Google. TPUs were designed specifically for accelerating machine learning tasks. They are custom-designed chips optimized to run TensorFlow, Google’s open-source machine learning framework, and are intended to deliver higher performance and efficiency than conventional hardware such as CPUs and GPUs for certain types of computations common in deep learning algorithms. Google announced TPUs in 2016 as part of their efforts to accelerate machine learning workloads and improve the performance of their artificial intelligence services.
I then went on to Groq, using the Llama 2 70B-4K model, and asked the same question, “Who invented the TPU?”
1.46 seconds later I got the following answer:
The Tensor Processing Unit (TPU) was invented by Google. The first TPU was developed in 2013 by the Google Brain team, led by Andrew Ng, a prominent researcher and entrepreneur in the field of artificial intelligence. The team recognized the need for specialized hardware to accelerate the training of deep neural networks, which were becoming increasingly complex and computationally intensive.
Google developed the TPU specifically to address the challenges of training deep neural networks, which were taking weeks or even months to train on traditional CPUs and GPUs. The company designed the TPU to optimize the performance of deep neural network computations, and the first TPU was deployed in Google’s data centres in 2015.
Since then, TPUs have become widely adopted in various industries, and Google has continued to develop and improve the technology. In 2018, Google announced the Cloud TPU, which allowed customers to rent TPU resources in the cloud, making it easier for organizations to access the technology without having to invest in their own hardware.
In summary, the Tensor Processing Unit (TPU) was invented by the Google Brain team, led by Andrew Ng, to address the need for specialized hardware to accelerate the training of deep neural networks.
Groq gives a longer, more in-depth answer (whether it’s a more accurate one is another question) while doing it 17.8 times faster.
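Both speed-ups are straightforward ratios of the numbers reported above, which you can sanity-check in a couple of lines of Python:

```python
# Throughput ratio from the Artificial Analysis test quoted above:
# Groq at 247 tokens/second vs Microsoft's deployment at 18 tokens/second.
throughput_speedup = 247 / 18
print(f"Throughput: {throughput_speedup:.1f}x faster")  # → Throughput: 13.7x faster

# Latency ratio from my own test: GPT-4 took 26 seconds,
# Groq (running Llama 2 70B-4K) took 1.46 seconds for the same question.
latency_speedup = 26 / 1.46
print(f"Latency: {latency_speedup:.1f}x faster")  # → Latency: 17.8x faster
```

Note these are two different measures: one is tokens generated per second, the other is total time to answer a single question, so the two multiples aren’t directly comparable.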
Thatās quite something.
Now, for me, there’s still a big question that remains.
Are these new LPUs the real deal? Are we talking about a true game-changing piece of hardware here, or is it a lot of hype, with the models, not the hardware, making the difference?
And amidst all of it, what’s the investible angle?
Right now, I’m not sure. I’m trying to figure that out. Ideally, if Groq were investible then it’d be something worth considering, a big chunk of asymmetric risk on the table…
But it’s not. So, what impact does this have short term? My initial feeling is that it’s another platform from which the wider industry leaps higher.
That it’s further confirmation that 2024 is going to be the year of AI hardware. That big tech, mid-tech and little-tech are all scrambling to develop, launch and bring to market game-changing AI hardware.
The demand for all this is not only strong but growing at an insatiable pace. I suspect we’ll be hearing a lot more about Groq (with a q, remember) in the near future as the idea of LPUs starts to grab hold of the market.
Ed note: by the way, I mentioned on Tuesday that my new AI briefing was coming soon. It’s actually going live next Tuesday. If you want to check it out, you’ll need to pre-register for it.
Again, to see what it’s all about, just hit the button below and it’ll take you where you need to go to find out more.
Boomers & Busters 💰
AI and AI-related stocks moving and shaking up the markets this week. (All performance data below is over the rolling week.)
Boom 📈
Appen (ASX:APX) up 35%
Brainchip (ASX:BRN) up 29%
Cyngn (NASDAQ:CYN) up 18%
Bust 📉
Vicarious Surgical (NASDAQ:RBOT) down 12%
C3.ai (NASDAQ:AI) down 5%
Nvidia (NASDAQ:NVDA) down 3%
From the hive mind 🧠
There are a lot of elections happening around the world in 2024. And it’s shaping up that AI is going to be the hottest story about how elections are run, how voters are influenced and, when losers lose, how these elections were “corrupted”. AI and elections are something we’re going to hear a lot more about in 2024.
While on the topic of AI and deepfakery, another “godfather” of AI has published an open letter about the need for greater regulation of deepfakes. While AI and elections will be a hot topic, so will the regulation of AI.
Often at home when I’m looking for something (that happens to be right in front of me) and can’t find it, my wife refers to it as me doing a “boy look”. I rebut that; it’s just how I’m wired. And I’m now hoping that AI is going to prove my point exactly.
Artificial Polltelligence 🗳️
This week’s poll is about using AI models. Mainly: have you used them, and if so, which ones…
If your answer happens to be “other”, drop us a comment to tell us which one(s) so we can dive in ourselves to find out more.
Weirdest AI image of the day
Communist countries’ propaganda posters starring famous fictional characters (r/Weirddallee)
ChatGPTās random quote of the day
“Technology is best when it brings people together, but it is at its peak when it makes us forget it’s even there, seamlessly enhancing our lives without our conscious recognition.” – Tatsuo Horiuchi
Thanks for reading, see you on Tuesday. And if you’re enjoying our work, please like, share and leave comments below,