If you are reading this post, I have a feeling that at some point in our lives, we have all received compliments from people. One of the compliments I got when I was younger is that I’m a witty and smart kid. And as a “witty kid”, my response to them was something along the lines of “smart is only a relative concept; if I surround myself with people at the same intelligence level as me, I am no better than a ‘normal’ individual”. The hidden message of my response is “by labelling me as smart, you are effectively labelling yourself as someone with a lower intellect”, but luckily, no one ever really picked up on that (and I promise you I don’t really mean it, it’s just a little joke to myself; I don’t even believe intelligence can be measured, yet). The response initially came off as me trying my best to be humble (again, I don’t really believe in the hidden message of that response), but in retrospect, it was probably evidence of my early interest in Intelligence.
I can’t remember when I started putting more and more serious “thinking energy” into psychology, but for the longest time, I struggled to find a good, general but conclusive, layman’s definition of what Intelligence actually means, until recently, when I came across one from a random guy/gal on a random forum. I don’t remember it word for word, but I have since modified the definition to fit my view: Intelligence is the ability to view problems in abstract form and the ability to relate them to abstractions previously encountered.
What I love so much about this definition is that it does not specifically state that someone who is considered intelligent can solve some complex math problem or play some complex musical piece on some complex instrument. In other words, the definition generalises Intelligence. And the more time I spend thinking about this, the more I realise how accurate this definition is. No seriously, think about it. When someone explains a new concept to you, the first thing you do is try to relate the new knowledge to previous knowledge that you already understand well. For example, the concept of Ohm’s Law is normally explained using water flowing through a pipe. Water is Current, the cross-sectional area of the pipe is Resistance, and the height of a point in the pipe is Voltage (assuming water flows solely because of gravity). Someone who is more of a visual thinker (such as I am) might appreciate this photo better:
But how could Water = Current, and how could the photo above represent Ohm’s Law? The only plausible answer I could give myself is that we humans tend to think in terms of abstraction (just like how I previously defined intelligence!). I hope we can both agree that water is not electricity, but in some sense, the two have their similarities: they both flow, they both have direction, and they both have different intensities depending on the surrounding environment (my first-year professor hates this analogy, no idea why). Not all people understand how electricity works, but hopefully most people have a general understanding of how water flows. The fact that we use water as an analogy for electricity supports my little theory.
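To make the analogy a little more concrete, here is a minimal sketch of Ohm’s Law itself (V = IR). The specific numbers are mine, purely for illustration:

```python
# Ohm's Law: V = I * R  (voltage = current * resistance)
# In the water analogy: a height difference drives the flow just as
# voltage drives the current, and a narrower pipe resists the flow
# just as a larger resistance limits the current.

def current(voltage: float, resistance: float) -> float:
    """Return the current (in amperes) through a resistor."""
    return voltage / resistance

# Illustrative numbers: a 9 V source across a 3-ohm resistor.
print(current(9.0, 3.0))  # 3.0 A

# "Narrowing the pipe" (doubling the resistance) halves the flow:
print(current(9.0, 6.0))  # 1.5 A
```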
The AI of Everything
Ever since the movie “The Theory of Everything”, I have been hooked on that concept. I don’t remember how they portrayed Stephen Hawking in the movie, but I do remember the feeling of intrigue when they introduced a theory that could potentially explain the whole universe. I don’t really know what that theory is, nor do I care enough to look into it, but I do enjoy the idea of having something so general (and so complex) that you can literally derive anything from it. My version of this, but for AI, is a single function, with a finite number of inputs, that outputs a single correct answer for every question. This function must be generalised to such a degree that if we model the problem (any problem) correctly and feed the correct inputs to said function, we will get an abstract answer that can be translated back into real-world meaning. Why is this possible? Because any problem can be modelled as an abstract concept, and (hopefully) if we make the abstraction general enough, all abstractions are relatable, just like water and current. And if the relatability (not a real word, but whatever) is high enough, there’s only one abstraction to be modelled and therefore only one function to be written.
But there is no scientific support for this!
Yes. It’s all a stupid ramble, possibly the result of exam stress pushing what has been in the back of my head all these years out onto the internet. I’m out, peace!