Open Positions and Collaborations

We are always looking for bright and enthusiastic researchers to join our team. We have funding opportunities for PhD studies and visits/internships at both the University of Edinburgh and the University of Glasgow. Please feel free to contact us if you are interested.

Past events


ISPASS-2018 tutorial

April 2, 2018 @ Belfast, Northern Ireland, United Kingdom
We brought Lift to ISPASS with this tutorial, titled "Lift — performance portable code generation on parallel accelerators". We went through the fundamental components of Lift: its language primitives and their properties, the Lift IR, the type system, parallelism exploitation and memory management. We dived deep into the rewrite rules and the arithmetic expression simplifier behind Lift, and covered the existing applications of the language along with some practical examples for those interested in trying them out.

Source Code

The Lift source code is publicly available. Lift is open source software released under the permissive MIT license.


Research directions

Linear Algebra

Starting from a single high-level program, our compiler automatically generates highly optimized and specialized matrix multiplication implementations. We group simple rewrite rules into more complex macro-rules, each describing a well-known optimization such as tiling or register blocking in a composable way. Using an exploration strategy, our compiler automatically generates 50,000 OpenCL kernels, each providing a differently optimized — but provably correct — implementation of matrix multiplication.
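As a concrete (if simplified) illustration of what a tiling macro-rule achieves, the Python sketch below contrasts a naive matrix multiplication with a tiled loop nest. The function names, the tile size T and the list-of-lists representation are illustrative choices, not the Lift implementation:

```python
def matmul_naive(A, B):
    # Reference implementation: one dot product per output element.
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def matmul_tiled(A, B, T=2):
    # Sketch of the loop structure a tiling rule would produce: iterate
    # over T x T tiles so each tile's working set fits in fast memory.
    # T is a hypothetical tuning parameter.
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for ii in range(0, n, T):
        for jj in range(0, p, T):
            for kk in range(0, m, T):
                for i in range(ii, min(ii + T, n)):
                    for j in range(jj, min(jj + T, p)):
                        acc = C[i][j]
                        for k in range(kk, min(kk + T, m)):
                            acc += A[i][k] * B[k][j]
                        C[i][j] = acc
    return C
```

In Lift itself, this transformation is expressed by rewriting compositions of patterns such as map and reduce rather than by restructuring loops by hand; the sketch only shows the resulting access pattern.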

Machine Learning

We make neural networks (NNs) performance-portable using Lift by implementing generic and NN-specific optimizations as rewrite rules for efficient hardware utilization, and by introducing traditional NN building blocks such as conv, norm and fully_connected for seamless integration of Lift with popular machine learning libraries.
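As a rough sketch of how a building block such as fully_connected decomposes into data-parallel patterns, the Python below expresses one layer as a dot product (a map plus a reduction) per output neuron, followed by a bias add and activation. The function signature and the ReLU activation are illustrative assumptions, not Lift's API:

```python
def fully_connected(W, b, x, act=lambda v: max(0.0, v)):
    # One output neuron per weight row: dot product (map + reduce),
    # then bias and activation. ReLU is an illustrative default.
    return [act(sum(w * xi for w, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]
```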

Optimising Reductions and Scans

We use Lift to create a high-level programming environment for heterogeneous computation, freeing the programmer from the burden of having to write complex device-specific code. This makes it possible to automatically identify parallelism in non-associative reductions and scans, enabling the generation of efficient parallel implementations for GPUs.
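One standard example of recovering parallelism from a seemingly sequential reduction (shown here as a general illustration, not necessarily the technique used in this work) is the linear recurrence x ← a·x + b: although the recurrence itself is not associative in x, composing the affine maps (a, b) is associative, so the work can be split across devices:

```python
from functools import reduce

def compose(f, g):
    # Composition of affine maps x -> a*x + b: apply f first, then g.
    # This operator is associative, which licenses a parallel split.
    (a1, b1), (a2, b2) = f, g
    return (a2 * a1, a2 * b1 + b2)

def recurrence_seq(pairs, x0):
    # The obvious sequential evaluation of x <- a*x + b.
    x = x0
    for a, b in pairs:
        x = a * x + b
    return x

def recurrence_parallel(pairs, x0):
    # Reduce each half independently (could run on separate devices),
    # then combine the two affine maps and apply to x0.
    mid = len(pairs) // 2
    left = reduce(compose, pairs[:mid], (1, 0))
    right = reduce(compose, pairs[mid:], (1, 0))
    a, b = compose(left, right)
    return a * x0 + b
```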

Sparse Data Parallelism

We demonstrate that high-level programming and high-performance GPU execution for sparse, irregular problems are not mutually exclusive. Our insight is that this can be achieved by capturing sparsity- and irregularity-friendly implementations within the target space of a pattern-oriented, high-level compilation and transformation system. By working in a language rather than a library, we benefit from the ability to generate implementations by program-specific composition of building blocks which capture detailed, low-level implementation choices.
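As a minimal sketch of the kind of low-level implementation choice such building blocks capture, here is a sparse matrix-vector product over the standard CSR (compressed sparse row) format; the format choice and function name are illustrative, not taken from Lift:

```python
def spmv_csr(values, col_idx, row_ptr, x):
    # CSR sparse matrix-vector product: each row is an independent dot
    # product over its nonzeros, exposing row-level parallelism despite
    # the irregular per-row nonzero counts.
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0
        for i in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[i] * x[col_idx[i]]
        y.append(acc)
    return y
```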

3D Wave Modelling

Simplified room acoustics simulations have been thoroughly investigated in Lift, and a number of other 2D and 3D benchmarks have also been implemented. Current work involves developing and formalising stencil optimisations for 3D codes, in particular 2.5D tiling, as well as ground penetrating radar algorithms. How best to abstract absorbing boundary conditions needs to be investigated in more detail, and primitives to accommodate these conditions need to be designed and added. Finally, a stencil-based DSL needs to be extended to compile into the Lift language.
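A minimal Python sketch of the idea behind 2.5D tiling for a 7-point 3D stencil: the computation streams through the z dimension one plane at a time, so only three consecutive planes need to be kept in fast memory. The uniform stencil weights and the untouched boundary are placeholders, not the acoustics model:

```python
def stencil3d_25d(grid):
    # 2.5D tiling sketch: tile in x/y (elided here), stream through z
    # keeping only three live planes (below, current, above) per step.
    nz, ny, nx = len(grid), len(grid[0]), len(grid[0][0])
    out = [[[0.0] * nx for _ in range(ny)] for _ in range(nz)]
    for z in range(1, nz - 1):
        below, cur, above = grid[z - 1], grid[z], grid[z + 1]
        for y in range(1, ny - 1):
            for x in range(1, nx - 1):
                # 7-point neighbourhood with placeholder uniform weights.
                out[z][y][x] = (cur[y][x] +
                                cur[y][x - 1] + cur[y][x + 1] +
                                cur[y - 1][x] + cur[y + 1][x] +
                                below[y][x] + above[y][x]) / 7.0
    return out
```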

Stencil Computations

Stencil computations are used in a wide range of applications, from physical simulations to machine learning, yet optimizing and tuning them for parallel hardware remains challenging. Lift is a new approach to achieving performance portability based on a small set of reusable parallel primitives. Its key novelty is the encoding of optimizations as a system of rewrite rules, which are used to explore the optimization space. We extend Lift with support for stencil computations by adding a small number of primitives together with a few rewrite rules, achieving performance portability for this class of applications.
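A Python analogue of how a stencil can be decomposed into a few reusable primitives — boundary handling (pad), neighbourhood creation (slide) and computation (map) — in the spirit of Lift's stencil extension. The clamping boundary and the 3-point sum are illustrative choices:

```python
def pad(l, r, xs):
    # Boundary handling: clamp by repeating the edge elements.
    return [xs[0]] * l + xs + [xs[-1]] * r

def slide(size, step, xs):
    # Neighbourhood creation: overlapping windows of the given size.
    return [xs[i:i + size] for i in range(0, len(xs) - size + 1, step)]

def stencil1d(f, xs):
    # 3-point stencil as a composition: pad, then slide, then map f.
    return [f(w) for w in slide(3, 1, pad(1, 1, xs))]
```

Because each stage is an ordinary data-parallel pattern, rewrite rules can transform the composition (e.g. fusing or tiling the windows) without changing the result.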

High-Level Synthesis

FPGAs are highly energy-efficient and offer great flexibility in implementing an application. With Lift, we want to exploit these characteristics to lift the performance and energy efficiency of neural network tasks. Starting from a high-level functional specification, the Lift compiler applies rewrite rules to explore different implementations and finally generates a low-level, optimised hardware design for the FPGA.


Meet the Lifters

Lift Alumni


This project is partially supported by: