OUR VISION

Rapid advances in ever-more-complex machine learning algorithms, and an explosion in the use of machine learning to bring AI into all walks of life, are massively increasing demand for compute capacity and the energy required to power it. If data centers were a country, they would currently be the world's fourth-largest consumer of power, behind only China, the US and the EU, while performance and power consumption remain significant hurdles to the adoption of machine learning at the edge. This growth is unsustainable with traditional software-only solutions.

Our vision is to produce machine learning inference solutions which meet these performance demands using minimum compute capacity and minimum energy, thus advancing the vast potential of machine learning to enhance our lives without costing the planet.

OUR APPROACH

Truly optimized solutions come only from understanding machine learning model behavior, the efficiency of model computation and the optimal design of the hardware that runs it. We advocate algorithm-accelerator co-design to create world-class solutions, and we view machine learning optimization as a three-part problem: models must be designed with close attention to their quantization, sparsity and compression, and with a clear understanding of the hardware cost of implementing them on the various platforms available. For more on this, see our white paper.
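To make the quantization idea above concrete, here is a minimal, illustrative sketch (not Myrtle's actual toolchain) of symmetric post-training int8 quantization: weights are mapped from 32-bit floats to 8-bit integers plus a single scale factor, a 4x reduction in storage and memory bandwidth at the cost of a small, bounded rounding error.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: map float weights to int8 in [-127, 127]."""
    scale = np.max(np.abs(weights)) / 127.0  # one scale factor for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

# Quantize a random weight tensor and measure the worst-case rounding error.
rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)
q, scale = quantize_int8(w)
max_error = np.max(np.abs(dequantize(q, scale) - w))
print(f"max quantization error: {max_error:.6f} (bound: {scale / 2:.6f})")
```

With round-to-nearest, the per-weight error is bounded by half the scale factor; whether that error is acceptable for a given model is exactly the kind of accuracy/hardware-cost trade-off the co-design approach weighs.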

Whilst hardware acceleration can significantly improve performance and energy consumption, machine learning models change so quickly that hardware optimized for current models can rapidly become outdated and comparatively inefficient. We therefore believe that re-programmable silicon will be at the heart of optimized inferencing in the future. We're in good company with this view: every server Microsoft deploys into its Azure data centers contains the re-programmable silicon we program, for exactly this reason. New FPGAs and data center boards from Xilinx and Intel allow optimized data structures to be continually redesigned and redeployed, so that new machine learning models can be optimized for performance and power consumption.

Our ability to massively reduce the hardware and energy costs of machine learning inference has enabled us to drive machine learning to the edge, where these costs become critical. As the owner of a global edge workload benchmark within MLPerf, we have the insight to produce optimized solutions for multiple embedded applications on FPGAs or even ASICs.

Myrtle abstracts the hardware design so that software engineers can harness reconfigurable technology for machine learning, mapping their algorithms onto a mixture of compute resources and achieving previously impossible combinations of performance, energy efficiency and deployment flexibility. This is the future of machine learning inference in the cloud and at the edge.

To find out how working with Myrtle can help you achieve a competitive advantage in your business, please contact us today.

NEWS

Myrtle releases optimized speech inference solution for the Intel® FPGA PAC D5005 platform

Reduces costs and removes growth constraints for businesses offering speech services

Read More

Myrtle.ai selected to provide artificial intelligence benchmark code for internationally recognised competition

MLPerf selects myrtle.ai to provide benchmark code for Speech To Text

Read More

KEY STAFF

Peter Baldwin

CEO

Peter is known for his data center software. He directly produced and supported simulation visual effects on twenty major Hollywood movies. He has a maths PhD and a special interest in the mathematical foundations of deep learning.

Liz Corrigan

Senior Engineering Manager

Liz runs Myrtle's engineering operations and has led teams creating FPGA-based products and systems for over 15 years. She has developed state-of-the-art mobile telecommunications equipment, shipped defence systems into theatre and led verification activities for security-critical applications. Liz is a Chartered Engineer with a technical background in RTL design, verification and electronic systems design.

Brian Tyler

Commercial Director

Brian has held C-level roles in management, sales & marketing at several international software and hardware companies.

Amy Murphy

Office Manager & Directors’ PA

Amy has over 20 years of experience in business administration, including many years within a leading international consulting firm. As PA to the CEO, Commercial Director and Senior Engineering Manager, Amy uses her project management qualification and experience to ensure operational efficiency in the delivery of the company’s key goals and provide support for the Board. Amy also runs the Myrtle office, coordinating events, meetings, functions, recruitment and all other areas of the business.

Graham Hazel

GPU Lead

Graham’s team accelerates our machine learning training and rapidly prototypes FPGA designs. Prior to joining Myrtle, Graham worked at the semiconductor company Arm.

Ollie Bunting

FPGA Lead

Ollie leads our FPGA group and has extensive experience in embedded systems including high speed packet inspection and high grade cryptography.

Christiaan Baaij

Lead Architect

ENIAC award winner and senior functional-programming developer on Myrtle’s neural net hardware team.

Jonathan Shipton

Software Lead

A Cambridge University-educated computer scientist and an expert in functional programming and its applications to low-level systems.

Sam Davis

Technical Lead - Machine Learning

Sam leads our machine learning engineering effort into new model topologies and scalable training. He currently chairs the Speech Working Group of the MLPerf.org benchmarking consortium and owns the official MLPerf transcription repository, which is used to benchmark all inference and training hardware for this category.

Ian Ferguson

West Coast Evangelist

Before joining us, San Francisco-based Ian was VP of Worldwide Marketing and Strategic Alliances at a global fabless semiconductor company, where he defined, articulated and delivered disruptive hardware and software technology to cloud infrastructure companies and enterprises.

Contact Us

If you’re interested in the efficient deployment of AI-based services using inferencing on DNNs, we’re interested in hearing from you.

+44 1223 967248
hello@myrtle.hayandrice.dev

Trusted By

One of ten global MLPerf benchmark owners