It reminds me of a manufacturer going through a LEAN process. They timed how long it took their most experienced person to assemble the product and took his best times as the standard. They met the target, but the product had so many flaws it couldn't be released; dismantling and reassembling it took twice as long as any assembly they had ever measured. But their metric was met!
What interests me are the fairly public AI flops of late....
....but I think that's because a few companies rushed their AI projects out the door so as not to be "late to market"... and probably released over the objections of their testing folks. Oh well. The users will test it if nobody else will.
I use AI tools for all sorts of things (I mentioned my transcription tools) - there are all sorts of narrow-focus tools that work pretty well. It's the ones aiming for "general" AI that tend to flop more.
Could this be a con for AI?
Your post is potent.