Since I'm very interested in numpy and vectorization, I became curious about when and how this approach appeared. I find this notion very convenient and productive, while it still leaves room for optimizations inside.

An obvious predecessor of numpy is MATLAB, which appeared a long time ago - in the late 70's - while the first release happened more than 30 years ago, in 1984! Many things were inherited from this language (not the multi-CPU/GPU backend, unfortunately, but I hope that will change one day).

A more interesting point is that vector-based syntax was not brought in by Fortran, as I thought earlier. The latter got its operations over vectors only in Fortran 90 (which was much later).

The real predecessor of MATLAB was APL (A Programming Language, not named after the Apple company). This language was quite cryptic and terse, and its programs looked less like code and more like strings of formulas. On top of that, APL used many special symbols and combined operators. For instance, `+/seq` is the sum of the elements of a sequence (and `+\seq` gives its running sums).

Wikipedia claims that the following expression returns the list of prime numbers in $1, 2, \ldots, R$:

(~R∊R∘.×R)/R←1↓ιR

Now you understand why this language isn't popular today :)
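For comparison, here is a rough NumPy translation of the same idea (my own sketch, not a canonical equivalent): build all pairwise products of the candidates and keep the numbers that never appear among them.

```python
import numpy as np

R = 20
# candidates 2..R (the APL fragment 1↓ιR drops the leading 1)
candidates = np.arange(2, R + 1)
# outer product: every pairwise product of candidates (APL R∘.×R)
products = np.outer(candidates, candidates)
# keep numbers that do NOT appear among the products (APL ~R∊...)
primes = candidates[~np.isin(candidates, products)]
print(primes)  # [ 2  3  5  7 11 13 17 19]
```

It is longer than the APL one-liner, but each step can be read aloud.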

Since then we have learned that many things like map, where, and sum can be written with words, and this is much clearer and more reliable.
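A small NumPy illustration of the same operations spelled out as words (my example, assuming nothing beyond standard NumPy):

```python
import numpy as np

a = np.array([3, -1, 4, -1, 5])
total = a.sum()                    # reduction with +, like APL's +/
running = a.cumsum()               # scan with +, like APL's +\
positive_idx = np.where(a > 0)[0]  # indices of positive elements
squared = np.square(a)             # elementwise map
print(total, running, positive_idx, squared)
```

Every operation here names what it does, which is exactly the readability win over APL's symbol soup.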

Another thing I find interesting is that APL's interpretation process was much more complicated than that of numpy or MATLAB, and in particular it supported lazy evaluation (so one could build up expressions before executing them). This also made various optimizations possible, such as executing several different vector operations at once (like a + b*c) if the processor supports it. In Russian this is called «векторное зацепление» (roughly "vector chaining"), but I haven't found an appropriate translation in the wiki.
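To illustrate what "construction of expressions" means, here is a toy sketch of deferred evaluation in Python (my own illustration, not APL's actual mechanism): operations record an expression tree instead of computing immediately, and nothing runs until `evaluate()` is called. A real system could then inspect the tree and fuse a + b*c into a single pass over memory (this is what the numexpr library does for NumPy).

```python
import numpy as np

class Lazy:
    """Toy deferred expression: records operations, evaluates on demand."""
    def __init__(self, fn):
        self.fn = fn  # a thunk producing the array when called

    @classmethod
    def wrap(cls, arr):
        return cls(lambda: arr)

    def __add__(self, other):
        return Lazy(lambda: self.fn() + other.fn())

    def __mul__(self, other):
        return Lazy(lambda: self.fn() * other.fn())

    def evaluate(self):
        return self.fn()

a = Lazy.wrap(np.array([1.0, 2.0, 3.0]))
b = Lazy.wrap(np.array([10.0, 20.0, 30.0]))
c = Lazy.wrap(np.array([2.0, 2.0, 2.0]))
expr = a + b * c        # nothing computed yet, just an expression
print(expr.evaluate())  # [21. 42. 63.]
```

Plain numpy, by contrast, is eager: b*c materializes a temporary array before the addition even starts.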