Knowing-How vs. Knowing-That

Philosophers have long recognized the difference between two types of knowledge: knowing-how and knowing-that. Roughly and very informally, the former is typically associated with skills and abilities, while the latter is associated with propositions (truths/established facts). In everyday discourse we use the word 'know' for both types of knowledge, which creates some confusion. So, for example, we say things like…

(Last updated April 28, 2021)

Note: The literature on ‘compound nominals’ is immense, and you will find the same phenomenon discussed under the label ‘compound nominals’ or ‘nominal compounds’ — so, I will use these terms interchangeably.

What are Nominal Compounds and Why Do They Matter?

In simple words, a nominal compound (henceforth, NC) is 0 or more adjectives followed by 1 or more nouns. You can think of it as some subject or topic of discussion (semantically, an entity) that can fill some 'slot' in a larger discourse (subject, agent, location, theme, etc.). So while I can discuss a certain 'system', I can also discuss a 'computer system'…
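The "zero or more adjectives followed by one or more nouns" pattern can be sketched directly. Below is a minimal matcher over (word, POS-tag) pairs; the tag names ('ADJ', 'NOUN', etc.) and the example sentence are illustrative assumptions, not part of the original discussion.

```python
# A minimal sketch: detect nominal compounds (NCs) as a sequence of
# zero or more adjectives followed by one or more nouns, given a
# POS-tagged sentence. Tag names 'ADJ'/'NOUN' are assumed conventions.

def extract_ncs(tagged):
    """Return NCs found in a list of (word, pos) pairs."""
    ncs, i, n = [], 0, len(tagged)
    while i < n:
        j = i
        while j < n and tagged[j][1] == 'ADJ':   # 0 or more adjectives
            j += 1
        k = j
        while k < n and tagged[k][1] == 'NOUN':  # 1 or more nouns
            k += 1
        if k > j:  # at least one noun was found: we have an NC
            ncs.append(' '.join(w for w, _ in tagged[i:k]))
            i = k
        else:      # adjectives with no following noun, or a non-NC word
            i += 1
    return ncs

tagged = [('the', 'DET'), ('large', 'ADJ'), ('computer', 'NOUN'),
          ('system', 'NOUN'), ('crashed', 'VERB')]
print(extract_ncs(tagged))  # → ['large computer system']
```

Note that 'computer system' alone (no adjective) would equally count as an NC under this definition, since the adjective part may be empty.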

A Personal Prologue

About four years ago I joined one of the coolest start-ups in Silicon Valley. For me, 'cool' here means that I was around some of the brightest people I have ever met — people with backgrounds in neuroscience, astrophysics, AI, computational mathematics, cognitive linguistics, and more. The group of (very) passionate and learned AI'ers was vocal and eager to discuss, debate, learn, and state an opinion, and I just love that environment. …

Generalization and Concepts

I can still recall the amazement I felt the first time it was fully explained to me how important our ability to abstract and generalize instances into abstract concepts was to our cognitive development. Without this brilliant invention we could not have developed the cognitive abilities that far surpass those of all other species.

Imagine that this were not the case and that we could only reason at the instance (object) level. It would mean, literally, that every time we felt like eating a banana, we would need to taste it a bit to see if we would like how…

Of course, every thing “isa” thing

The image above might, at first read, sound silly. You might be saying: of course "everything is a thing" — so what? All it says is that "every x is an x", which is vacuously true, an empty statement with no information content to speak of.

Well, maybe — so far. But the statements in the image above do say, ontologically, something that is not trivial. If relations (friendship), events (war), properties (darkness), activities (dancing), states (death), etc. are objects like birds and dogs, then, like any other object, relations can in turn participate in relations…
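The claim that relations, events, and states are themselves objects, and can therefore participate in further relations, can be sketched in a few lines. All class and instance names below are illustrative assumptions:

```python
# Sketch: reifying relations as first-class objects, so that a
# relation (e.g., a friendship) can itself participate in another
# relation. Names here are illustrative, not from the original text.

class Thing:
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.name

class Relation(Thing):  # a relation "isa" thing, too
    def __init__(self, name, *participants):
        super().__init__(name)
        self.participants = participants

john, mary = Thing('john'), Thing('mary')
friendship = Relation('friendship', john, mary)

# The friendship itself participates in another relation:
caused_by = Relation('caused-by', Thing('shared-hobby'), friendship)

print(isinstance(friendship, Thing))  # → True: every thing "isa" thing
```

Because `Relation` inherits from `Thing`, a relation can fill any participant slot that an ordinary object can, which is exactly the non-trivial ontological point being made.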

(last revised: March 20, 2021)

Not that the Chinese Room Argument (CRA), which the philosopher John Searle launched in 1980 as an attack on Computationalism, needs another debate. But all the (in my opinion successful) rebuttals of the CRA have concentrated on the 'mechanics' of the experiment, missing, in my view, the central problem in Searle's argument.

The Most Common Reply to CRA

Luminaries such as Daniel Dennett and Jerry Fodor mostly used the Systems Reply: "while [John Searle] understands only English, when he is combined with the program, scratch paper, pencils and file cabinets, they form a system that can understand Chinese." The technical details of…

Are neural networks just large lookup tables with fuzzy/approximate keys, as many cognitive scientists and analytic philosophers (of mind and language) have long concluded? If so, then on small domains there should be a simple scheme that performs as well as any neural network. In this post we report on testing this hypothesis.

Neural Networks as Large Lookup Tables: Small Domains

In scenarios where the number of weights (erroneously called 'parameters' in the DL literature) is much larger than the number of training data points, the network will simply converge to an acceptable error rate by memorizing all the training samples. In small domains, therefore, the network should (almost) perfectly…
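On a small domain, the "lookup table with fuzzy/approximate keys" hypothesis can be tested with nothing more than nearest-key retrieval. The sketch below, where the Euclidean metric and the toy data are illustrative assumptions, memorizes training pairs and answers a query with the label of the closest stored key:

```python
# Sketch: a "lookup table with fuzzy keys". We memorize (key, label)
# pairs and answer a query by returning the label of the nearest
# stored key. The Euclidean metric and data are illustrative choices.

import math

class FuzzyLookupTable:
    def __init__(self):
        self.table = []  # list of (key_vector, label) pairs

    def memorize(self, key, label):
        self.table.append((key, label))

    def lookup(self, query):
        # an exact hit, or otherwise the nearest stored key, wins
        key, label = min(self.table,
                         key=lambda kv: math.dist(kv[0], query))
        return label

t = FuzzyLookupTable()
t.memorize((0.0, 0.0), 'A')
t.memorize((1.0, 1.0), 'B')
print(t.lookup((0.1, 0.2)))  # → 'A' (an approximate key still retrieves A)
```

If the hypothesis holds, such a table should match a heavily over-parameterized network on a small, densely sampled domain.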

Could I, who doesn’t know how to play Go, beat a Go grandmaster?

Could I (or anyone) who does not even know how to play chess or Go beat a chess or a Go grandmaster? I say I can. And here’s how: I entered a hotel lobby where I recognized two chess masters (CM1 and CM2) sitting on opposite sides of the lobby. I managed to challenge the two to a game of chess although I do not even know how to play the game. They obliged. I asked one of them to start, and asked the other if I could start. They also obliged. CM1 made the first move. I remembered the…
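The trick in the story is pure relaying: each master's move is fed to the other, so the two masters effectively play each other while I take credit for one side. A minimal sketch, where the two player callables are hypothetical stand-ins for the masters:

```python
# Sketch of the relay trick: I play CM1's moves against CM2 and
# CM2's replies against CM1, so the two masters really play each
# other. The 'player' callables below are hypothetical stand-ins.

def play_relay(cm1, cm2, rounds):
    """Relay moves between two players; I never choose a move myself."""
    transcript = []
    move = cm1(None)            # CM1 opens the game
    for _ in range(rounds):
        transcript.append(move)
        reply = cm2(move)       # I play CM1's move against CM2...
        transcript.append(reply)
        move = cm1(reply)       # ...and CM2's reply back against CM1
    return transcript

# Toy players that follow a fixed script (real masters would think):
script1, script2 = iter(['e4', 'Nf3']), iter(['e5'])
cm1 = lambda opponent_move: next(script1)
cm2 = lambda opponent_move: next(script2)
print(play_relay(cm1, cm2, rounds=1))  # → ['e4', 'e5']
```

Against one of the two masters I am guaranteed at least a draw, without knowing a single rule of the game — which is the punchline the story is driving at.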

Humans are good at combining (ad infinitum) meaningful concepts to generate novel (composite) creations (in images, language, plans, etc.). These new creations are not just combinations that produce objects belonging to the same class; we can also combine concepts from different classes, resulting in a novel creation that is, in turn, a new class on its own.
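The point about cross-class composition yielding a genuinely new class can be sketched as follows; the concrete concepts, attributes, and the naive merge rule are all illustrative assumptions:

```python
# Sketch: composing concepts from *different* classes yields a new
# concept with its own properties, not just another member of either
# class. Concepts and attributes here are illustrative assumptions.

def compose(head, modifier):
    """Naive cross-class composition: modifier reshapes the head."""
    new = dict(head)         # start from the head concept...
    new.update(modifier)     # ...and let the modifier override it
    new['name'] = f"{modifier['name']} {head['name']}"
    return new

lion = {'name': 'lion', 'class': 'animal', 'animate': True}
stone = {'name': 'stone', 'class': 'material', 'animate': False}

stone_lion = compose(lion, stone)
print(stone_lion['name'], stone_lion['animate'])  # → stone lion False
```

Even this crude merge shows the phenomenon: a 'stone lion' is neither just an animal nor just a material, but a new kind of thing whose properties cannot be read off either parent class alone.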

A lot of (misguided) excitement has been generated around so-called GANs: Generative Adversarial Networks. The brilliant AI-knowledgeable media, including 'social' media, is awash with articles and posts on deepfakes and how AI is already "creating" novel videos, novel images, and even novel music and text. In fact, the famously touted GPT-3 is one such example of AI claimed to be getting closer to Artificial General Intelligence (AGI). It has even led the 'respectable' Guardian to publish an article proclaiming that AI can already write news stories. (I expect more ridiculous claims, by the way.)

What are GANs?

Be assured, and sleep well. Generative Adversarial Networks…

Walid Saba, PhD

Principal AI Scientist, ONTOLOGIK.AI
