
Will we have a DT 4 in the near future?

Sorry to see you resorting to such phrasing.

While you are aware that self-driving is costly to develop, you seem unaware that machine learning is an even costlier technology. In both cases, there are big corporations that happily foot the bill for research and experimentation. Once development has borne fruit, mass production quickly drives costs down to affordable levels, as has already happened with ML.

That’s not the key point, however. The divergence between the two of us reflects the larger debate over the legitimacy of generative AI, which, among other things, extracts patterns from apparently unrelated data. The question: should we actually use these unexplained patterns to predict (or “analyze,” if you prefer) singular future events?

Some people answer with a firm YES. They hold that it’s trivial to use these patterns once they have been generated. If using these patterns results in a mess, then so be it. No one should be held responsible for simply (and legitimately) experimenting with available tools.

Some other people, including me, would say NO. One of our main points is that such patterns are unreliable: they are either demonstrably so, or they have not been proven otherwise. Unreliable patterns may still be useful for provoking further thought, but they should definitely not be used for prediction or analysis. Another point is the lack of responsibility: who will pay the bill if someone elects to follow an unproven path, only to meet a dead end? Should the advocates of that path be held responsible in some way?

Both points of the second group are reflected in my argument against the use of the disruptive innovation paradigm, which is essentially generated by cherry-picking from the histories of unrelated disciplines, without regard to the numerous counterexamples available.
