Are the limits of composability the limits of understanding? (1)
Why should the universe be knowable in the first place?
If it is knowable by necessity, why is it that it should be fully knowable by us and not by other species?
Are we getting back to Kantian ideas, where the noumenon is unknowable and the phenomenon is knowable only insofar as it is composable? (Is the phenomenon a composable construct built from the noumenon?) (2)
Mathematics is all about composability, and it turns out there is an isomorphism relating mathematical proofs and models of computation: the Curry-Howard-Lambek isomorphism (3), which implies that proof systems on one hand and models of computation on the other are in fact the same kind of mathematical objects.
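The correspondence can be made concrete in any typed functional language: read a type as a proposition and a well-typed program as a proof of it. A minimal Haskell sketch (the function names are my own, chosen for illustration):

```haskell
-- Under Curry-Howard: product types are conjunctions and function
-- types are implications, so a term of type (a, b) -> a is a
-- "proof" of the proposition (A AND B) -> A.
proj1 :: (a, b) -> a
proj1 (x, _) = x

-- Function composition proves transitivity of implication:
-- (A -> B) -> (B -> C) -> (A -> C).
chain :: (a -> b) -> (b -> c) -> (a -> c)
chain f g = g . f
```

Composing programs and chaining implications are literally the same operation, which is the sense in which mathematics and computation share composability.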
Digital physics seems to be the epitome of applied composability (4). We proceed to understand the world via quasi-psychological and computational metaphors.
Therefore at the heart of digitalism lies the assumption of composability. But if a subset of the universe is composable, does that mean the whole is? (Remember that induction is incomplete, and so are mathematical theories, according to Gödel.) (5)(6)
I guess this is a case where something that works is held to be true because it works (an axiom of usefulness?). As Christopher Sutton noted yesterday, while we were discussing the limits of knowledge: (in these cases) "We just assume we're right, because it wouldn't have any utility for us to be wrong. We wouldn't be able to understand it anyway."
Moreover, surpassing human-level performance in artificial intelligence systems becomes harder as we approach it. Once we surpass human-level performance there is no way to use human knowledge directly to improve the system, such as getting labeled data from humans, gaining insight from manual error analysis, or improving bias/variance (this last because we lack information about the minimum achievable error for a given task; we tend to use human-level error as an estimate of the Bayes optimal error, the theoretical best possible error). (6) (7)
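The bias/variance diagnosis described above can be sketched in a few lines. This is a hedged illustration, not a library API; the function names and all error figures are hypothetical:

```haskell
-- Sketch: use human-level error as a stand-in for the unknown
-- Bayes optimal error, as described in the text above.

-- Avoidable bias: gap between training error and the Bayes proxy.
avoidableBias :: Double -> Double -> Double
avoidableBias humanErr trainErr = trainErr - humanErr

-- Variance: gap between dev (validation) error and training error.
variance :: Double -> Double -> Double
variance trainErr devErr = devErr - trainErr

-- e.g. avoidableBias 0.01 0.03 suggests focusing on bias first;
-- once training error drops below human error the proxy goes
-- negative and stops telling us how much avoidable bias remains,
-- which is exactly the breakdown discussed above.
```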
Questions:
- In what way do you think this form of skepticism helps science and technology (especially AI)?
- What are the consequences of the breakdown of composability for the simulation hypothesis?
- To what extent are human-level performance and non-composability limitations for AI? Can AI succeed at non-composable problems by learning new representations? (e.g. instead of using phonemes as a representation, let AIs learn their own features for encoding spoken language.)
- Remember Wittgenstein: "the limits of my language are the limits of my world".
- https://en.wikipedia.org/wiki/Noumenon
- https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence
- https://en.wikipedia.org/wiki/Digital_physics
- https://plato.stanford.edu/entries/induction-problem/
- https://www.youtube.com/watch?v=tWLf-VRrVRM&index=84&list=PLBAGcD3siRDguyYYzhVwZ3tLvOyyG5k6K
- https://www.youtube.com/watch?v=OFEfbu2Ykaw&index=86&list=PLBAGcD3siRDguyYYzhVwZ3tLvOyyG5k6K