My time working with IBM's LIME (LIquid MEtal) was eye-opening and exciting. Working within a paradigm that hadn't truly been explored before, with a language as it was developing on the bleeding edge of tech, was just an amazing experience. And working closely with people from IBM during the whole process was the icing on the cake.
What I can say about the language is this: it had a long way to go during the time period I was using it (2011-2012). But despite its flaws, it was unique and nice to use, when it worked.
LIME is a superset of Java, and at the time I used it, it was heavily embedded into Eclipse -- there was no LIME outside of Eclipse. But the language embraced a unique idea: write a single program, and run parts of it on different hardware platforms. You would write a single program in LIME, and the compiler would automatically migrate different parts of the program to either the host CPU or attached GPUs or FPGAs, based on where it thought the best performance would be for that task. Programs are defined in terms of a task graph -- basically a state machine that data flows through -- and any nodes that are purely functional and highly parallel in nature become candidates for execution on the FPGA or GPU, for example. An amazing concept that I think will be a driving force for computing, moving forward.
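To make the task-graph idea concrete, here's a minimal sketch in plain Java (not actual LIME syntax, which I no longer have on hand): each stage is a pure function, and data flows through the composed pipeline. In LIME, the compiler could recognize that such pure, side-effect-free stages are safe to offload to a GPU or FPGA; here they simply run on the CPU. The stage names and the pipeline itself are illustrative assumptions, not code from the LIME toolchain.

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class TaskGraphSketch {
    // Stage 1: a pure, element-wise transform -- exactly the kind of
    // node a LIME-style compiler could offload to parallel hardware.
    static Function<Integer, Integer> square = x -> x * x;

    // Stage 2: another pure stage in the graph.
    static Function<Integer, Integer> addOne = x -> x + 1;

    public static void main(String[] args) {
        // Compose the stages into a simple linear task graph;
        // data flows left to right through the pipeline.
        Function<Integer, Integer> pipeline = square.andThen(addOne);

        List<Integer> out = List.of(1, 2, 3).stream()
                .map(pipeline)
                .collect(Collectors.toList());
        System.out.println(out); // prints [2, 5, 10]
    }
}
```

Because each stage is pure, the map over the input list could be executed in any order, in parallel, on any device -- which is precisely the property LIME exploited when deciding what to migrate off the CPU.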
Working with the language and its designers at IBM was a nice reciprocal exchange -- I would push the language until it broke, send off an email, and get a new build with those bugs fixed a few days later, rinse and repeat. And thinking of how to implement things like the fast Fourier transform in new ways was exhilarating and frustrating in equal measure. But it really opened me up to new programming paradigms, and the possibility that certain implementations of so-called naive algorithms can perform much faster than their optimized versions on highly parallel hardware like an FPGA.
All in all, it was a great experience.
Here's a link to IBM's page on LIME: