Siri 2.0 - Apple and generative AI

OrangeCream

Ars Legatus Legionis
55,362
Apple has been devoting a lot of die real estate to AI and has not really taken advantage of it. Have they been working on something that we have not thought of?
They’ve been using AI and I think you just don’t know it:

Those applications enable great user experiences, like searching for a picture in the Photos app, measuring the size of a room with RoomPlan, or ARKit semantic features, as referenced in our research highlight 3D Parametric Room Representation with RoomPlan.

The framework uses a device’s sensors, trained ML models, and RealityKit’s rendering capabilities to capture the physical surroundings of an interior room.
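
(For the curious, the developer-facing side of RoomPlan is only a few lines. A rough sketch; it needs a LiDAR-equipped iPhone/iPad, and I'm writing the delegate method from memory, so treat the exact signature as an assumption:)

Code:
import RoomPlan

// Minimal sketch: drive a RoomPlan scan and read back the parametric model
// it produces (walls, doors, windows, and furniture as typed objects).
final class RoomScanner: NSObject, RoomCaptureSessionDelegate {
    private let session = RoomCaptureSession()

    func start() {
        session.delegate = self
        session.run(configuration: RoomCaptureSession.Configuration())
    }

    // Called as the on-device ML models refine their understanding of the room.
    func captureSession(_ session: RoomCaptureSession, didUpdate room: CapturedRoom) {
        print("Walls so far: \(room.walls.count), objects: \(room.objects.count)")
    }
}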

The iPhone's keyboard on iOS 17 leverages a transformer model, similar to what OpenAI (the company behind ChatGPT) uses in its own language models, to learn from what you type on your keyboard to better predict what you might say next, whether it's a name, phrase, or curse word.

As previously mentioned, every time you use Face ID to unlock your iPhone or iPad, your device uses the Neural Engine. When you send an animated Memoji message, the Neural Engine is interpreting your facial expressions.

Apple has included Neural Engine chips in all iPhones since the iPhone X, and these provide the computing power behind Memoji, Face ID, and the newly unveiled Live Voicemail.

With the power of the Neural Engine, Live Voicemail transcription is handled on-device and remains entirely private.
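
That same on-device path is exposed to developers through the Speech framework. Roughly (assuming the speech-recognition permission has already been granted and the locale supports on-device recognition):

Code:
import Speech

// Sketch: transcribe an audio file entirely on-device.
func transcribeOnDevice(fileURL: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.supportsOnDeviceRecognition else {
        print("On-device recognition not available for this locale/device")
        return
    }

    let request = SFSpeechURLRecognitionRequest(url: fileURL)
    request.requiresOnDeviceRecognition = true   // audio never leaves the device

    recognizer.recognitionTask(with: request) { result, error in
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        } else if let error = error {
            print("Recognition failed: \(error)")
        }
    }
}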

Autocorrect receives a comprehensive update with a transformer language model, a state-of-the-art on-device machine learning language model for word prediction — improving the experience and accuracy for users every time they type.

In Photos, the People album uses on-device machine learning to recognize more photos of a user’s favourite people, as well as cats and dogs.


The first of the iOS 15 features that screamed “Machine Learning!” to me was Live Text. Live Text is a feature in iOS 15 that enables your iPhone to read text in your Photos app.

Another of the new iOS 15 features that uses the Neural Engine is object recognition in Photos. This feature works similarly to Live Text, except that it recognizes objects rather than text. The example Apple used is that you can point your iPhone camera at a dog, and your iPhone will not only recognize that it’s a dog but also which breed of dog it is.
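
Both Live Text and that kind of object/breed recognition surface through the Vision framework, which schedules its models on the Neural Engine where it can. A bare-bones sketch:

Code:
import Vision

// Sketch: run text recognition (Live Text-style) and image classification
// (object/breed-style) over a single image.
func analyze(cgImage: CGImage) throws {
    let textRequest = VNRecognizeTextRequest { request, _ in
        let lines = (request.results as? [VNRecognizedTextObservation])?
            .compactMap { $0.topCandidates(1).first?.string } ?? []
        print("Recognized text:", lines.joined(separator: " "))
    }

    let classifyRequest = VNClassifyImageRequest { request, _ in
        let labels = (request.results as? [VNClassificationObservation])?
            .filter { $0.confidence > 0.3 }
            .prefix(3)
            .map { $0.identifier } ?? []
        print("Looks like:", labels.joined(separator: ", "))
    }

    try VNImageRequestHandler(cgImage: cgImage, options: [:])
        .perform([textRequest, classifyRequest])
}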

Notifications will now be grouped in a Notification Summary, so you don’t see less important notifications crowding up your Lock Screen all day. You can customize the Notification Summary feature, or let the Neural Engine handle it for you.

In Maps in iOS 15, you’ll be able to point your camera around while walking. That will allow you to see AR directions projected on your environment. Say you’re trying to get to the movies and aren’t sure which road to take. You’ll be able to point your iPhone around and see directions highlighted on the streets and buildings around you.


Long story short: any time your phone or Mac is doing object recognition, text recognition, speech recognition, text summaries, image analysis, voice recognition, text prediction, maps prediction, calendar predictions, or speech-to-text, you’re using the ANE. Those are pretty classic AI/ML tasks.
 

Tagbert

Ars Tribunus Militum
1,721
They’ve been using AI and I think you just don’t know it: […]
A lot of people associate AI with a ChatGPT-style answerbot but that is not necessarily the most useful way to deploy AI for Apple and its customers. I’d rather see it used to enhance how we use the apps and abilities of the phone and give a richer way to interact with those features.
 

Honeybog

Ars Scholae Palatinae
2,075
iOS 17.4 has a (great) new feature that provides automatic transcripts for Podcasts. Some Podcasts only have transcripts for episodes from the past few weeks, but others, like the BBC’s In Our Time podcast, have everything transcribed going back to 2002.

It seems like a massive undertaking to transcribe twenty years of even just one weekly podcast, and I kind of wonder if Apple didn’t do this in part to generate some really massive training sets.
 

OrangeCream

Ars Legatus Legionis
55,362
iOS 17.4 has a (great) new feature that provides automatic transcripts for Podcasts. Some Podcasts only have transcripts for episodes from the past few weeks, but others, like the BBC’s In Our Time podcast, have everything transcribed going back to 2002.

It seems like a massive undertaking to transcribe twenty years of even just one weekly podcast, and I kind of wonder if Apple didn’t do this in part to generate some really massive training sets.
I mean, processing audio data is fast. It wouldn’t take even one month to process that data, and once the model is trained, the episodes can be processed in parallel.
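
Once the model exists, a back catalog is embarrassingly parallel. Something like this, where transcribeEpisode is a hypothetical stand-in for whatever speech pipeline actually does the work:

Code:
import Foundation

// Hypothetical per-episode transcription call; the real work would be a
// Speech/Core ML pipeline. The point is only that episodes are independent,
// so a 20-year back catalog fans out trivially across cores or machines.
func transcribeEpisode(_ url: URL) async -> String {
    return "…transcript of \(url.lastPathComponent)…"
}

func transcribeBackCatalog(_ episodes: [URL]) async -> [String] {
    await withTaskGroup(of: String.self) { group in
        for url in episodes {
            group.addTask { await transcribeEpisode(url) }
        }
        var transcripts: [String] = []
        for await transcript in group {
            transcripts.append(transcript)
        }
        return transcripts
    }
}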
 

dal20402

Ars Tribunus Angusticlavius
7,234
Subscriptor++
I have seen, personally, the widely reported issue of ChatGPT making up legal citations. It appears to think it's OK to synthesize citations that look a lot like real ones. I don't trust anything it tells me about law that I don't already know. It is a very dangerous tool if used to orient oneself in an area of law that is directly adjacent to a well-known one, which is something lawyers do ALL THE TIME.
 
  • Like
Reactions: mklein

ZnU

Ars Legatus Legionis
11,694
Hallucinating citations is pretty much already solved; that solution just isn't implemented in ChatGPT. Hook the model up to a document database. Have it pull everything relevant into its context window (impossible 12 months ago; now there are models with perfect recall over 1000+ page windows). Generate several candidate responses, then check them against each other and the source text in additional passes.
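
As a sketch of what I mean (searchCaseDatabase and askModel below are hypothetical stand-ins for a real document index and a real model API, not any particular product):

Code:
import Foundation

struct Citation {
    let id: String
    let text: String
}

// Hypothetical retrieval step: pull plausibly relevant documents into the
// prompt instead of letting the model "remember" case law on its own.
func searchCaseDatabase(query: String) -> [Citation] {
    return [Citation(id: "…reporter citation…", text: "…full opinion text…")]
}

// Hypothetical model call.
func askModel(prompt: String) -> String {
    return "…model output…"
}

func answerWithGroundedCitations(question: String, candidateCount: Int = 3) -> String {
    let sources = searchCaseDatabase(query: question)
    let context = sources.map { "[\($0.id)] \($0.text)" }.joined(separator: "\n")

    // Generate several candidate answers, each forced to cite the supplied sources.
    let drafts = (0..<candidateCount).map { _ in
        askModel(prompt: "Answer using ONLY these sources, citing them by id:\n\(context)\n\nQ: \(question)")
    }

    // Second pass: check the drafts against each other and against the source
    // text, rejecting any citation that does not appear in the sources.
    let review = """
    Here are \(drafts.count) draft answers and the sources they must rely on.
    Keep only claims the drafts agree on and whose citations appear verbatim in the sources.
    Sources:
    \(context)
    Drafts:
    \(drafts.joined(separator: "\n---\n"))
    """
    return askModel(prompt: review)
}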

People should be careful not to let present limitations color their understanding of the possibilities here too much. This tech is too new for anyone to have a good grasp on which limitations will prove enduring and which won't.
 

OrangeCream

Ars Legatus Legionis
55,362
I have seen, personally, the widely reported issue of ChatGPT making up legal citations. It appears to think it's OK to synthesize citations that look a lot like real ones. I don't trust anything it tells me about law that I don't already know. It is a very dangerous tool if used to orient oneself in an area of law that is directly adjacent to a well-known one, which is something lawyers do ALL THE TIME.
It’s not thinking, so it’s not making a judgement. It’s not even synthesizing citations, technically. It’s just a really long autocorrect string where words are pasted together in statistically likely ways.

It’s like trusting autocorrect to generate your replies in a forum. Here is what autocorrect does for me:

I have a lot of followers and I have a ton of followers and followers so I don’t know how to use it in a forum.
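
To make that concrete, here is the whole mechanism in toy form: count which word tends to follow which, then emit the most likely follower. A real transformer uses vastly more context and parameters, but it's the same 'statistically likely next word' idea:

Code:
import Foundation

// Toy next-word predictor built from bigram counts.
func buildBigrams(from corpus: String) -> [String: [String: Int]] {
    let words = corpus.lowercased().split(separator: " ").map(String.init)
    var counts: [String: [String: Int]] = [:]
    for (current, next) in zip(words, words.dropFirst()) {
        counts[current, default: [:]][next, default: 0] += 1
    }
    return counts
}

// Suggest the statistically most likely follower of a word.
func predictNext(after word: String, using counts: [String: [String: Int]]) -> String? {
    return counts[word.lowercased()]?.max { $0.value < $1.value }?.key
}

let model = buildBigrams(from: "I have a lot of followers and I have a ton of time")
print(predictNext(after: "have", using: model) ?? "?")   // prints "a"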
 

wrylachlan

Ars Legatus Legionis
12,769
Subscriptor
And what is thinking but an emergent behavior?
It may just be that the language center of our brain is an advanced series of autocorrects.
That seems very likely to me with the caveat that ‘advanced’ is doing some heavy lifting there and ‘series of’ is exactly right but subtle. My guess is that if you put the average ChatGPT skeptic in a Time Machine and sent them forward 100 years to when we know substantially more about the brain, they would be shocked at how ChatGPT-like our cognition is.
 
  • Like
Reactions: ant1pathy
My guess is that if you put the average ChatGPT skeptic in a Time Machine and sent them forward 100 years to when we know substantially more about the brain, they would be shocked at how ChatGPT-like our cognition is.

The inverse may be true as well: that they may be relieved to see how unlike ChatGPT our cognition is.
 

Honeybog

Ars Scholae Palatinae
2,075
Mark Gurman (via MacRumors) had a report a few days ago about the anticipated "AI" features planned for iOS 18 and it's kind of interesting. There's surprisingly few "generative" features, and a lot of it seems like stuff that shoulda woulda been added into iOS already if Apple didn't get distracted from its earlier ML push.

  • Photo retouching.
  • Voice memo transcription.
  • Suggested replies to emails and messages.
  • Auto-generated emojis based on the content of a user's messages, providing all-new emoji for any occasion beyond the existing catalog.
  • Improved Safari web search.
  • Faster and more reliable searches in Spotlight.
  • More natural interactions with Siri.
  • More advanced version of Siri designed for the Apple Watch, optimized for "on-the-go tasks."
  • Smart recaps of missed notifications and individual messages, web-pages, news articles, documents, notes, and more.
  • Developer tools for Xcode.

Photo retouching is a no-brainer. It's kind of amazing that they've taken this long.

Voice memo transcription seems obvious, given that they've rolled it out for podcasts. I'm actually kind of excited for this one, because the normal dictation doesn't really do it for my needs.

Suggested replies: Kind of already exists between keyboard suggestions and Apple Watch's smart replies, no? Presumably, it would be similar, but longer. Assuming Gurman's list is inclusive, this seems like it's the closest Apple will get to generative content. It'll be interesting to see if this draws from a user's previous messages. I almost never use the Apple Watch smart replies, because they're so completely different from my actual voice. I'm also not looking forward to the looming feature where everyone is just trading generated missives back and forth.

Emojis: It'll be interesting to see what happens here. Apple being Apple, I doubt we'll see anything scandalous, but I'm sure people will complain about them being too limited.

Improved Safari search: Make the top suggestion a Wikipedia entry, not a Fandom wiki entry. There, improved.

Spotlight: Is this a problem anyone has? If anything, I wouldn't want Spotlight to be less strict with search terms.

"More natural interactions with Siri" it's interesting that this is phrased as improving interactions with Siri, not improving Siri's usefulness.

I have no idea how to read the Siri on Apple Watch thing.

Recaps = Summarizer?

Xcode getting a GitHub-style copilot sounds useful.

I can't say I'm hugely excited about most of these, but it would be kind of nice to see Apple try and steer things away from generative LLM and towards useful tools.
 

OrangeCream

Ars Legatus Legionis
55,362
Mark Gurman (via MacRumors) had a report a few days ago about the anticipated "AI" features planned for iOS 18 and it's kind of interesting. There's surprisingly few "generative" features, and a lot of it seems like stuff that shoulda woulda been added into iOS already if Apple didn't get distracted from its earlier ML push.
I'm not sure that they should have been added, because way back when the technology wasn't as mature as it is now.
Photo retouching is a no-brainer. It's kind of amazing that they've taken this long.
The devil is always in the details, because 'retouching' has a plethora of meanings.

Does it mean enhance portraits, skin blemishes, closed eyes, and exposure? Because that's entirely different than erasing power lines, removing cars, and re-arranging people.
Suggested replies: Kind of already exists between keyboard suggestions and Apple Watch's smart replies, no? Presumably, it would be similar, but longer. Assuming Gurman's list is inclusive, this seems like it's the closest Apple will get to generative content. It'll be interesting to see if this draws from a user's previous messages. I almost never use the Apple Watch smart replies, because they're so completely different from my actual voice. I'm also not looking forward to the looming feature where everyone is just trading generated missives back and forth.
This is going to be interesting if it 'talks like you'.
Spotlight: Is this a problem anyone has? If anything, I wouldn't want Spotlight to be less strict with search terms.
Yes, this is a problem. Try searching for 'HSA receipts for tax year 2021': first it has to identify receipts, summarize the contents, determine whether they are HSA-approved, and check whether the purchase date fell in 2021 (a sketch of what apps have to hand Spotlight today is at the end of this post).
I have no idea how to read the Siri on Apple Watch thing.
Currently Apple Watch can only do things like send a message, set a timer, open apps, start an activity, and connect to the internet for more advanced queries. I wasn't able to create a Calendar event using my Watch.
I can't say I'm hugely excited about most of these, but it would be kind of nice to see Apple try and steer things away from generative LLM and towards useful tools.
That's been Apple's MO for the past decade. I'm looking forward to further AI enhancements in the Camera app.
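
Here's the Spotlight sketch I mentioned above: today a query like that only works if an app handed Spotlight explicit metadata up front, roughly like this hypothetical receipts app does with Core Spotlight. The 'AI' version would be Spotlight deriving those facts from the document itself.

Code:
import CoreSpotlight
import UniformTypeIdentifiers

// Sketch: how an app indexes a receipt so Spotlight can find it today.
// Every attribute has to be supplied explicitly by the app.
func indexReceipt(id: String, summary: String, purchaseDate: Date) {
    let attributes = CSSearchableItemAttributeSet(contentType: .pdf)
    attributes.title = "HSA receipt"
    attributes.contentDescription = summary
    attributes.contentCreationDate = purchaseDate
    attributes.keywords = ["HSA", "receipt", "taxes", "2021"]

    let item = CSSearchableItem(uniqueIdentifier: id,
                                domainIdentifier: "receipts",
                                attributeSet: attributes)
    CSSearchableIndex.default().indexSearchableItems([item]) { error in
        if let error = error { print("Indexing failed: \(error)") }
    }
}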
 
  • Like
Reactions: Tagbert

Honeybog

Ars Scholae Palatinae
2,075
I'm not sure that they should have been added, because way back when the technology wasn't as mature as it is now.

I mean, retouching, blemish removal, etc. has been a bog standard image editing feature for a long time now.

As for the maturity of the technology, the actual details will matter of course, but on the surface, not a lot here seems super cutting edge. Apple has been touting ML features for, what, a decade? They certainly could have implemented a lot of these earlier if they had the impetus to.

I’m really not complaining, though. If Apple needs to get on the hypewagon, I’m happy that this suggests they’re doing it in the least Microsoft-y way possible.

Yes, this is a problem. Try searching for 'HSA receipts for tax year 2021'; first it has to identify receipts, summarize the contents, determine if they are HSA approved, and if the purchase date happened in 2021

I would have just put those in “~/Documents/Financial/Taxes/2021/Receipts/HSA/“. :geek:

I wasn't able to create a Calendar event using my Watch.

Really? I just added one with Siri on my ancient S5.
 

wrylachlan

Ars Legatus Legionis
12,769
Subscriptor
I can't say I'm hugely excited about most of these, but it would be kind of nice to see Apple try and steer things away from generative LLM and towards useful tools.
I’m not sure what you mean here, as most of what’s on that list likely has some element of LLM under the hood, except the visual elements, but then that’s still generative AI.
 

Honeybog

Ars Scholae Palatinae
2,075
I’m not sure what you mean here, as most of what’s on that list likely has some element of LLM under the hood, except the visual elements, but then that’s still generative AI.

I was actually saying that it’s closer to what you have been advocating for in this topic, as opposed to what most people assumed it would be (ChatGPT-esque, like literally everyone else is doing).
 

wrylachlan

Ars Legatus Legionis
12,769
Subscriptor
I was actually saying that it’s closer to what you have been advocating for in this topic, as opposed to what most people assumed it would be (ChatGPT-esque, like literally everyone else is doing).
Gotcha. So moving away from ‘Chat’ as the modality for leveraging LLMs. Agreed, LLMs and generative AI writ large are so much more than just a chat engine.
 
I mean, retouching, blemish removal, etc. has been a bog standard image editing feature for a long time now.
Not using ML. It requires licensing the work of tens of thousands of professional artists with before/after pairs as training data to develop the feature using ML.
As for the maturity of the technology, the actual details will matter of course, but on the surface, not a lot here seems super cutting edge. Apple has been touting ML features for, what, a decade? They certainly could have implemented a lot of these earlier if they had the impetus to.
The issue isn't the inference; it's creating the datasets and models and then training them. The fact that research papers trying to solve this problem were being published even in 2023 means it's still an issue:
Face retouching aims to remove facial blemishes, while at the same time maintaining the textural details of a given input image. The main challenge lies in distinguishing blemishes from the facial characteristics, such as moles. Training an image-to-image translation network with pixel-wise supervision suffers from the problem of expensive paired training data, since professional retouching needs specialized experience and is time-consuming.

I would have just put those in “~/Documents/Financial/Taxes/2021/Receipts/HSA/“. :geek:

So you concede that the feature doesn't exist yet. If I wanted to create the above folder I would be using Spotlight to do so.
Really? I just added one with Siri on my ancient S5.
Did you make sure to turn off the radio on your iPhone and step away so that the iPhone mic can't hear you?

I could only get it to work when I turned on my iPhone's BT and Wi-Fi.
 
  • Like
Reactions: Tagbert

cateye

Ars Legatus Legionis
11,760
Moderator
I would love to have a conversational Siri. I'm pretty sure I'm not the only one ¯\_(ツ)_/¯

To the extent that Apple's AI ambitions could allow Siri to have far better awareness of my data in order to be conversational, that would be nice. There are hints of that already—the way Siri can spot references to appointments in emails and will queue them as potential Calendar entries. That, just with far more density and capability. "Siri, can you summarize in a few sentences the last 24 hours worth of emails from my client Foo?" — I would use that constantly, and it being verbal/conversational would, in a way, be more useful to me as a momentary way to engage information than a text- or action-based UI.
 

Tagbert

Ars Tribunus Militum
1,721
To the extent that Apple's AI ambitions could allow Siri to have far better awareness of my data in order to be conversational, that would be nice. There are hints of that already—the way Siri can spot references to appointments in emails and will queue them as potential Calendar entries. That, just with far more density and capability. "Siri, can you summarize in a few sentences the last 24 hours worth of emails from my client Foo?" — I would use that constantly, and it being verbal/conversational would, in a way, be more useful to me as a momentary way to engage information than a text- or action-based UI.
I don't know if they will have that level of interaction ready by June, but I hope that they are working toward that over the next year or two.

They are surely aware of the appeal of this video, Apple Knowledge Navigator Video (1987):

View: https://youtu.be/umJsITGzXd0?si=6KlZXpsIOfK4kJPT


We may finally be getting to the tech level where something like this is possible.
 

wrylachlan

Ars Legatus Legionis
12,769
Subscriptor
Just wondering here without much knowledge — can’t Apple use forthcoming TSMC 2nm chips and create some sort of enterprise LLM type of GPU hardware rack that could compete with Nvidia on a cost-per-watt metric, to greatly reduce compute power and win the day?
No. The LLM itself is far more important than the hardware it’s running on. And 2nm efficiency isn’t going to unlock some new LLM that isn’t possible without it. It’s incrementally more efficient, not categorically.

Honestly I think Apple’s general approach to ML/AI - the path they’ve been on for years - is the best strategy for winning at ML/AI. They just need to resource it a little better and push harder. Building core models as a shared service in the OS and then having every corner of the OS/app ecosystem use those models to implement features is the right approach. Just keep going.
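
That shared-service pattern already exists in miniature: the OS ships general models (embeddings, sentiment, tagging) that any app can call without bundling its own weights. A quick illustration with the NaturalLanguage framework (the printed values are only indicative):

Code:
import NaturalLanguage

// System-provided word embedding: nearest neighbors without shipping a model.
if let embedding = NLEmbedding.wordEmbedding(for: .english) {
    let neighbors = embedding.neighbors(for: "calendar", maximumCount: 3)
    print(neighbors)   // nearby words and their distances
}

// System-provided sentiment model, same idea.
let text = "I would love a conversational Siri."
let tagger = NLTagger(tagSchemes: [.sentimentScore])
tagger.string = text
let (sentiment, _) = tagger.tag(at: text.startIndex, unit: .paragraph, scheme: .sentimentScore)
print(sentiment?.rawValue ?? "n/a")   // score in roughly -1...1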
 
No. The LLM itself is far more important than the hardware it’s running on. And 2nm efficiency isn’t going to unlock some new LLM that isn’t possible without it. It’s incrementally more efficient, not categorically.

Honestly I think Apple’s general approach to ML/AI - the path they’ve been on for years - is the best strategy for winning at ML/AI. They just need to resource it a little better and push harder. Building core models as a shared service in the OS and then having every corner of the OS/app ecosystem use those models to implement features is the right approach. Just keep going.

Also you don't want reticle-limited training chips as the first chips on a new node; that would be a great way of almost burning money. Small, highly redundant chips like mining ASICs have been used as pipe-cleaners in recent years by TSMC. Apple makes their new chips when a new node has a reasonable yield for "normal"-sized SoCs.

Even if Apple did decide to try racing ahead by using a new node to make chips for internal use rather than to sell for a high margin (an odd approach to business if you ask me), that's maybe half the story for ML hardware, as there's a whole load of I/O and networking expertise needed as well. NVSwitch chips are themselves large high-end ASICs. You need very fast DPU/NIC hardware too. Google goes even further and has MEMS optical switching for the TPU Pods. Does Apple have this expertise?

Google and Nvidia have about a decade of designing ML-focused chips behind them, and they sink huge amounts of money into this effort. There are literally dozens of companies spamming ALUs/FPUs on chips pitched as GPU/TPU-killing parts, and by and large none of the competition has had even modest success. Scaling up systems and the associated software is a far more difficult task than designing a cut-and-paste ASIC and paying TSMC to make it. There are loads of "killer chips" on paper, but they aren't getting much adoption.

If this stuff was easy Nvidia wouldn't be raking in the obscene amounts of cash they are. Nvidia are going to be so flush with cash they'll be able to pursue all sorts of interesting opportunities in the years ahead.
 
  • Like
Reactions: mklein

wrylachlan

Ars Legatus Legionis
12,769
Subscriptor
I think an open question is whether there’s a limit to the ROI for the naive ‘increase the number of nodes in the model’ approach to AI. NVIDIA and others will benefit greatly if throwing more transistors at the problem gives you a benefit in perpetuity. But my suspicion is that sooner rather than later, how the model is constructed (its design), rather than how many nodes it has, will be the most important factor.
 
While I'm optimistic about AI, I could see it being another voice assistant 2.0 if it just doesn't work reliably. Google and Amazon are laying off people from those teams because people didn't want the Trojan-horse hardware, Microsoft is removing Cortana from Windows, and I don't use Siri.

They're command-line interfaces: they can do simple things, and I get frustrated when they don't understand me or just bring up a web search. Cool, they didn't save me any time. I just don't think they're very good.

I have used Google's NotebookLM to summarize journal articles by topic, and it's pretty slick. However, even when providing sources, it has been wrong, and I can't trust wrong. I work in forensics, and I don't trust e-mail summaries if I think they might cause me to miss a subpoena. And since I work with PII, security is something I have to be concerned about.

Currently, Siri wants to change the time I pick up a rental car on Saturday. Which is odd, as the time it wants to change my calendar entry to is in my current time zone, not the one where I'm going to be. Useless.

Microsoft and Apple seem hellbent on putting AI in their OSes, and I get it (new shiny, more buzzwords), but I'm also not at a point where I trust it currently. As an end user, I'm not clamoring for this, just vague feature adds, now featuring AI.
 
I think an open question is whether there’s a limit to the ROI for the naive ‘increase the number of nodes in the model’ approach to AI. NVIDIA and others will benefit greatly if throwing more transistors at the problem gives you a benefit in perpetuity. But my suspicion is that sooner rather than later, how the model is constructed (its design), rather than how many nodes it has, will be the most important factor.

Oh lots of people have rightly observed that LLM growth and what fabs are producing are diverging like a rocket taking off. It's implausible that the approach can be sustained. A back of an envelope calculation will show how ridiculous it is.

One thing that might push Apple to go in-house is customisation. A lot of people wonder why Google doesn't just give up and pay Nvidia even more billions on top of the billions they already pay them for GPUs, but that's because people run generic benchmarks and draw erroneous conclusions about TPU performance. Google designs TPUs for their workloads, and they buy Nvidia GPUs to offer in the cloud to companies that are CUDA users. Google's TPUs are not designed to win benchmarks (which Google does not care about) but to offer more bang per buck for Google's needs (which saves Google billions). Maybe Apple has needs where designing a bunch of ASICs makes sense, but I don't get the impression that for Apple it's as vital as it was for Google. In Google's case it was basically "double the server fleet or design chips". :)
 
  • Like
Reactions: Bonusround

wrylachlan

Ars Legatus Legionis
12,769
Subscriptor
LLM growth and what fabs are producing are diverging like a rocket taking off
There are far more organizations thinking that they will be able to monetize LLMs than actually will be successful. It will be interesting to see what happens to the chip market when all this comes crashing down in 3-5 years.
 

Bonusround

Ars Scholae Palatinae
1,060
Subscriptor
I never claimed it was :). I did say that transcribing podcasts to train AI would be incredibly brazen.
Using freely-available audio content, from a public catalog, with the side benefits of furthering accessibility of said content? I don't understand the harm.

Could said model be used for other purposes? Sure. Will Apple offer a tool that generates entire podcasts from scratch? Unlikely.
 

Bonusround

Ars Scholae Palatinae
1,060
Subscriptor
Oh lots of people have rightly observed that LLM growth and what fabs are producing are diverging like a rocket taking off. It's implausible that the approach can be sustained. A back of an envelope calculation will show how ridiculous it is.
There are far more organizations thinking that they will be able to monetize LLMs than actually will be successful. It will be interesting to see what happens to the chip market when all this comes crashing down in 3-5 years.

Yes, sizing up to be a textbook SV bubble and contraction. Though I wonder whether the consolidation for this particular cycle might take a bit longer. With so much fundamental research ongoing, at such a blistering pace, it feels like it could be a while before the workloads presented by these models, much less the models themselves, are fully characterized and understood.
 
  • Like
Reactions: gabemaroz
Microsoft and Apple seem hellbent on putting AI in their OSes, and I get it (new shiny, more buzzwords), but I'm also not at a point where I trust it currently. As an end user, I'm not clamoring for this, just vague feature adds, now featuring AI.
There’s no evidence that Apple is hellbent on putting AI in their OS in the same way Microsoft is. There is seven years of evidence that Apple is thoughtfully integrating AI into the product where it makes a difference in capability and usability.

I 100% expect Apple to integrate AI into data detectors:
Data detection methods in other frameworks detect common types of data represented in text, and return DataDetection framework classes that provide semantic meaning for matches.
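
Today's data detectors are pattern matchers over a fixed set of types, with no model involved; the gap between this and what an ML-backed detector could pull out of a messy email is exactly the opportunity:

Code:
import Foundation

// Today's data detectors: heuristic matching for a fixed set of types
// (dates, links, addresses, phone numbers).
func detectDatesAndLinks(in text: String) {
    let types: NSTextCheckingResult.CheckingType = [.date, .link]
    guard let detector = try? NSDataDetector(types: types.rawValue) else { return }

    let range = NSRange(text.startIndex..., in: text)
    for match in detector.matches(in: text, options: [], range: range) {
        if let date = match.date { print("Found a date: \(date)") }
        if let url = match.url { print("Found a link: \(url)") }
    }
}

detectDatesAndLinks(in: "Doors at 7pm on March 3rd, tickets at https://example.com")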
 

Bonusround

Ars Scholae Palatinae
1,060
Subscriptor
I 100% expect Apple to integrate AI into data detectors:
Data detection methods in other frameworks detect common types of data represented in text, and return DataDetection framework classes that provide semantic meaning for matches.
Second that 100%. Data Detectors and the (sometimes) closely related 'Siri Intelligence' both feel like partially-executed features whose full vision has not been realized. And we have waited for sooo long.

To wit: please explain, in this day and age, why the act of transferring concerts, air travel, and other events onto my calendar is such a clumsy and inconsistent experience. WHY?? <shakes fist at sky>

If the AI push resolves this I will be forever grateful... it's low-hanging fruit at this point. IMO
 
Second that 100%. Data Detectors and the (sometimes) closely related 'Siri Intelligence' both feel like partially-executed features whose full vision has not been realized. And we have waited for sooo long.

To wit: please explain, in this day and age, why the act of transferring concerts, air travel, and other events onto my calendar is such a clumsy and inconsistent experience. WHY?? <shakes fist at sky>

If the AI push resolves this I will be forever grateful... it's low-hanging fruit at this point. IMO
Yeah. An ML DataDetector would analyze an email and extract calendar entries, map entries, reminders, alarms, and summaries.

A calendar-specific AI model would analyze your calendar and generate a daily itinerary with suggested reminders, alerts, routes, and alarms.

A contacts-specific AI model would suggest sending itineraries to specific people, though obviously you can change the to/cc/bcc lists.

That stuff is what I expect out of Apple.
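
And the last step of that pipeline is already short once something (ML or otherwise) has pulled out a title and a date. A sketch with EventKit, assuming calendar access has been granted:

Code:
import EventKit

// Sketch: turn an extracted (title, date) pair into an actual calendar entry.
func addToCalendar(title: String, start: Date, store: EKEventStore) throws {
    let event = EKEvent(eventStore: store)
    event.title = title
    event.startDate = start
    event.endDate = start.addingTimeInterval(60 * 60)   // assume a one-hour slot
    event.calendar = store.defaultCalendarForNewEvents
    try store.save(event, span: .thisEvent)
}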
 
  • Like
Reactions: Tagbert

iljitsch

Ars Tribunus Angusticlavius
8,474
Subscriptor++
please explain, in this day and age, why the act of transferring concerts, air travel, and other events onto my calendar is such a clumsy and inconsistent experience. WHY?? <shakes fist at sky>
Good question. iCalendar has been around for more than two decades, so what gives?

If the AI push resolves this I will be forever grateful... it's low-hanging fruit at this point. IMO
But it won't, as neural networks inherently have an error rate.

While it's bad to have the wrong thing happen consistently, it's arguably even worse to have the right thing happen inconsistently.
 
  • Like
Reactions: gabemaroz