I have no clue about the first one; it's the first time I'm hearing of it.
As for the second, I'm very familiar with it from computer science:
https://en.wikipedia.org/wiki/Computational_complexity_theory
It's essentially the theory that tries to classify problems into "complexity classes" so that, given a problem and its size (defined as the size of the input data), we can say whether it can be solved in a reasonable amount of time, and even estimate that time on a machine with a given amount of memory and a given processor speed / number of processors.
Roughly speaking, there are three broad classes of problems: linear problems (solvable in linear or "near linear" time), polynomial problems (e.g. solvable in quadratic or cubic time), and exponential problems (which we often call "intractable"). The computational model used here is the deterministic Turing machine.
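To make those classes concrete, here's a minimal sketch (my own toy examples, not from any particular source) of functions whose running time grows linearly, quadratically, and exponentially with the input size:

```python
def linear_sum(xs):
    # O(n): touches each element once
    total = 0
    for x in xs:
        total += x
    return total

def quadratic_duplicates(xs):
    # O(n^2): compares every pair of elements
    count = 0
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                count += 1
    return count

def exponential_subsets(xs):
    # O(2^n): enumerates every subset -- quickly becomes intractable
    subsets = [[]]
    for x in xs:
        subsets += [s + [x] for s in subsets]
    return subsets
```

The first stays usable for inputs with billions of elements, the second already struggles with millions, and the third is hopeless beyond a few dozen.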
But this classification is not fully settled. In particular, for many problems where only exponential-time algorithms are known (the NP-complete problems being the famous example), it hasn't been proved that they lie outside the polynomial class, i.e. that no polynomial-time solution exists; this is essentially the P vs NP question. It just happens that nobody has found an algorithm that solves them in polynomial time.
In practical terms, even polynomial algorithms (say cubic ones) are too slow for many of today's problems, where the input routinely contains billions of samples. With 10^9 samples, a cubic algorithm needs about (10^9)^3 = 10^27 operations; at roughly one nanosecond per operation on modern hardware that's 10^18 seconds, i.e. tens of billions of years. You don't want to wait that long for your answer. Whereas if the problem is linear, an input of a billion samples takes about a billion nanoseconds, which is roughly 1 second.
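A quick back-of-the-envelope version of that arithmetic (my own illustrative numbers, assuming about one nanosecond per operation):

```python
n = 10**9                 # a billion samples
sec_per_op = 1e-9         # ~1 nanosecond per operation

linear_seconds = n * sec_per_op        # 10^9 ops  -> ~1 second
cubic_seconds = n**3 * sec_per_op      # 10^27 ops -> ~10^18 seconds

print(f"linear: ~{linear_seconds:.0f} s")
print(f"cubic:  ~{cubic_seconds / (3600 * 24 * 365):.1e} years")
```

That's why in practice people aim for linear or near-linear algorithms when the data gets big, even when a polynomial algorithm exists.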