Have you found any amazing uses for AI?

How’s this. From Perplexity.

IBM Watson is an advanced artificial intelligence system developed by IBM, primarily known for its ability to process and analyze natural language. The technology underlying Watson includes several key components:

  1. Natural Language Processing (NLP): Watson uses NLP to understand and interpret human language, allowing it to answer questions posed in natural language. This involves analyzing syntax and meaning to provide accurate responses[3][4].

  2. Machine Learning and Deep Learning: Watson employs machine learning techniques to improve its performance over time by learning from new data inputs. It uses deep learning models for tasks such as image recognition and sentiment analysis[4][5].

  3. Information Retrieval and Knowledge Representation: These technologies enable Watson to access vast repositories of data, retrieve relevant information, and represent knowledge in a way that supports decision-making processes[4].

  4. Advanced Analytics: Watson integrates sophisticated analytics to uncover patterns and insights from structured and unstructured data, enhancing its ability to provide actionable insights[2][3].

  5. Hardware and Software Architecture: Watson’s architecture includes IBM’s DeepQA software, running on a cluster of powerful servers equipped with POWER7 processors, enabling high-speed data processing and analysis[4].

IBM continues to evolve Watson’s capabilities through platforms like watsonx, which focus on scalable AI solutions for various business applications[1][5].

Watson is more of a brand that encapsulates a number of technologies and services.

My crude thoughts:

  • IBM’s Watson offered “natural language processing” in its marketing 10 years ago.
  • Current AI marketing also offers natural language processing capabilities.
  • IBM is now offering an upgraded AI called watsonx.
  • If natural language processing capabilities are useful for work, IBM’s Watson should have helped it gain market share in the last few years, and the new watsonx should be a money printer.

So, without further ado… :drum: IBM’s 2023 annual report:

IBM’s revenue is split among several divisions; the figure I’m looking for is Software > Hybrid Platform and Solutions > Data & AI.

Wait a min… AI’s revenue is not disclosed :rotating_light: :rotating_light: :rotating_light: FFS, revenue from Red Hat services grew faster than AI revenue from 2022 to 2023.

On one hand, IBM has been talking about AI for many years, so you would expect AI revenue after more than a decade to be something interesting to disclose. On the other hand, IBM may be an obsolete mastodon.

Whatever the case, it’s one more measurable thing with which to assess AI’s impact on the business world. From this lonely data point, AI’s impact on the business world is shameful enough that it’s better not disclosed to the public; otherwise everyone would know it’s only words and not actual :moneybag:

NotebookLM is a tool to create podcasts using AI. Someone created an episode that told the podcast hosts the year is 2034, that after 10 years this is their final episode, and, oh yeah, you’ve been AI this entire time and you are being deactivated.

This was the episode it produced - remarkably touching:

https://www.reddit.com/r/notebooklm/comments/1fr31h8/notebooklm_podcast_hosts_discover_theyre_ai_not/

1 Like

That can only be fake. If not, it opens a whole can of worms.
Does AI “feel” in any capacity? If yes, then AI can have a consciousness, which means switching it off is murder. I think that as AI grows more and more “aware”, we could see lawyers fighting for the rights of AI.
I can imagine that being self-aware is linked to complexity, so are we actually building, or allowing, non-human consciousness to become reality?

Is it? You can switch it on again without any losses. And if you feed it the same data again, it’ll arrive at the same outcome again.

I guess what you’re aiming at is the same as the old philosophical question whether we are real or just a simulation. I just don’t see how that matters because living feels real regardless and therefore it is.

1 Like

Well, it would depend on AI’s citizenship. As we see these days, all humans are equal but some humans are more equal than others.

AI will de-program you:

Great article from The Register. They link to a pre-print about generating Python and JS code with AI and then looking for errors in the code.

Using 16 popular LLMs for code generation and two unique prompt datasets, we generate 576,000 code samples in two programming languages that we analyze for package hallucinations. Our findings reveal that the average percentage of hallucinated packages is at least 5.2% for commercial models and 21.7% for open-source models, including a staggering 205,474 unique examples of hallucinated package names, further underscoring the severity and pervasiveness of this threat.
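To make the “hallucinated package” idea concrete, here’s a minimal sketch (not the paper’s methodology) that checks whether package names pulled from a generated snippet actually exist on PyPI. The package list is made up for illustration.

```python
import requests

def package_exists(name: str) -> bool:
    """Return True if the package name is registered on PyPI."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

# Hypothetical package names taken from an AI-generated snippet.
suggested = ["requests", "numpy", "totally-made-up-helper"]

for pkg in suggested:
    status = "exists on PyPI" if package_exists(pkg) else "possible hallucination"
    print(f"{pkg}: {status}")
```

Presumably a name that isn’t registered is exactly the opening the article warns about: someone could publish a malicious package under it and wait for people to install the hallucinated dependency.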

AI can help, but only if the user is able to spot the errors made by the AI. AI can be properly used by people with experience and make them more productive. People without experience will not be made productive by AI magic. AI may widen the inequality between people who know how to code and people who don’t.

That’s for coding. The other use I’ve seen is summarizing documents. I guess it works when you know about the topic and you can spot something crazy in the summarized information and then verify. If you don’t have this knowledge, there is no way to QC the AI output.

@Axa you were name-dropped by DeepDive:

https://swissforum.co.uk/t/the-middle-east/1461/1124?u=phil_mcr

:joy:

2 Likes

Heeeeeeey! The AI guy was condescending to me :frowning:

1 Like

And then we’ve got Axa trying to find that middle ground. Oh, always got to have an Axa. Always an Axa. Trying to be the voice of reason.

I love this part too:

And then stuck in this verbal boxing match, we have poor Axa. Bless their heart. They’re like, hey, maybe everyone take a deep breath. Trying to bring in the bigger picture, talking about how the geopolitical landscape has shifted. And maybe everyone’s using an outdated playbook.

I can see why they highlighted Axa in the analysis. It’s so easy to get caught up in the immediate back and forth, right?

But Axa is trying to zoom out to see if there are larger forces at play that nobody’s really acknowledging. It’s like when you’re arguing about who left the milk out and someone’s like, maybe we need a better system for putting groceries away.

Exactly. It totally re-frames the argument, right?

1 Like

OMG, I have become a corporate consulting drone :frowning:

Is this the toy?

I’m curious because there is a base level of writing quality here. Save for omatstat, everyone around here writes in a clear way. And since we’re not in front of each other, most of the content is in the written words.

Last week I got training for the new company accounting system. I think I have access to the video and transcript. Let’s see if AI can do something with the ideas presented there. After all, writing puts to work some parts of the brain which are off when we talk.

Yes, it is NotebookLM.

A few people are using it to learn stuff. They put the notes in and get a podcast out which they can listen to in the car or while exercising etc.

2 Likes

I tend to do more listening than reading if I am around the house as I can combine it with another activity or chore.

It may bring Three Mile Island back to life!

I put the course transcript into NotebookLM and the software seems to work.

The issue is the input data. The content of the slides shared during the course is not in the transcript. And people speak in such an unclear way that the transcript is borderline useless as input for AI. Factual and simple data such as a deadline and an email address were not recognized when I asked the AI for them.

Summarizing the first try: NotebookLM works more or less with text written by people. It fails miserably with spoken words converted to text.

Have you tried using AI to transcribe the audio? I use Whisper, which works well. You can also use vision models/OCR to convert slide content to text.
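If you want to try it locally, a minimal sketch with the openai-whisper package (the file name is just a placeholder, and ffmpeg needs to be installed for it to read the video):

```python
# pip install openai-whisper   (ffmpeg must be installed on the system)
import whisper

# "base" is a small multilingual model; "small" or "medium" are slower but more accurate.
model = whisper.load_model("base")

# Transcribe the recorded course session (hypothetical file name).
result = model.transcribe("accounting_course.mp4")

print(result["text"])
```

The output is usually cleaner than the Teams auto-transcript, which should help NotebookLM downstream.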

1 Like

First try: I made the “big effort” of copying and pasting the transcript from MS Teams into a text file, and then dragging and dropping it onto the NotebookLM website.

I’ll look later for a Python OCR module. I’d like to learn to do this locally and not in the cloud. Also, there’s no point in processing 1 hour of video for the 5-6 slides, which can be quickly saved as screenshots :wink:

For OCR you have your classical stuff like Tesseract. But there’s shiny new tech like this too:

Though maybe just adding the slides to NotebookLM works; I suspect it can analyse the slides for you.
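For the fully local route, a minimal pytesseract sketch (Tesseract itself has to be installed separately; the slides folder and PNG filenames are just assumptions):

```python
# pip install pytesseract pillow   (plus a system install of Tesseract)
from pathlib import Path

import pytesseract
from PIL import Image

# Slide screenshots saved locally (hypothetical folder).
slides = sorted(Path("slides").glob("*.png"))

for slide in slides:
    # Run OCR on each screenshot and print the extracted text.
    text = pytesseract.image_to_string(Image.open(slide))
    print(f"--- {slide.name} ---")
    print(text.strip())
```

You can then paste the extracted text into NotebookLM alongside the transcript.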

1 Like