Walid Saba, PhD, in ONTOLOGIK: "There's no Ambiguity, but Infinite Intensions: On Ambiguity, Language, and the Language of Thought" (Jun 23)
Walid Saba, PhD, in ONTOLOGIK: "LLMs don't even do 'approximate retrieval' — embarrassingly, they try to recall some 'similars'" (May 15). In an excellent new post, Melanie Mitchell addresses an important issue related to large language models (LLMs), namely the nature of…
Walid Saba, PhD, in ONTOLOGIK: "A Refutation of John Searle's Famous Chinese Room Argument?" (Apr 28)
Walid Saba, PhD, in ONTOLOGIK: "Large Language Models and what Information Theory tells us about the Evolution of Language" (Sep 26, 2022). In an article in The Gradient (and briefly in a previous post) I described what I called MTP — the Missing Text Phenomenon, which is the…
Walid Saba, PhD, in ONTOLOGIK: "Why is 'Learning' so Misunderstood?" (Sep 1, 2022). I have written a few posts where I make the point that most of the important knowledge that is needed to build intelligent agents is not…
Walid Saba, PhD, in ONTOLOGIK: "Why Commonsense Knowledge is not (and cannot be) Learned" (Aug 28, 2022; last edited August 29, 2022)
Walid Saba, PhD, in ONTOLOGIK: "Compositionality: the curse of connectionist AI" (Aug 1, 2022). A while back, a 2-day online workshop on compositionality and AI was organized by Gary Marcus and Raphael Milliere, with additional…
Walid Saba, PhD, in ONTOLOGIK: "Universally Valid Templates: one more time for the Deep Learners who appreciate proofs" (Jul 18, 2022)