CLIKA
Compressed Blog

Your trusted source of everything TinyAI

Jane Han / CLIKA

In My Biased Opinion Recap: Inference Optimization

This is a recap of one of the episodes from our video series, “In My Biased Opinion (IMBO).” For the full details of this episode, follow the link here. From static quantization to quantization-aware training, the quantization techniques to make your AI models lightweight for accelerated inference are

Corbin Foucart

Embedding native C/C++ in Java without an IDE (Part II)

In the previous post, we discussed at length how to cross-compile and run C/C++ code natively on the Android operating system using the Android Studio NDK. We also alluded to the fact that while the Android operating system is built on an LTS Linux kernel and it’s

Google

Meet the inaugural cohort of our Google for Startups Accelerator: AI First North America

April 09, 2024 Posted by Matt Ridenour, Head of Startup Developer Ecosystem - USA Startups are at the forefront of developing solutions for some of humanity's most pressing challenges by using AI, driving breakthroughs across industries from healthcare to cybersecurity. To help AI-focused startups scale quickly while building

Corbin Foucart

Cross-compiling a C++ Library Using the Android NDK (Part I)

A common theme in the world of MLOps is the development of machine learning models on state-of-the-art cloud computing environments or desktop Linux computers with GPU capabilities and x86-64 hardware. However, we often do so with the intention of eventually using these models on edge devices such as smartphones, tablets,

Jane Han / Bar Rozenman

Meet the Team: Bar Rozenman, Deep Learning Engineer

CLIKA’s mission is to make your AI-based solutions profitable and implementable on any type of hardware device. We achieve this by optimizing inference and making your AI model lightweight through our service offering: a toolkit that automates the engineering workflow from model compression to compiling for hardware deployment. Implementing

Bar Rozenman

Fully Sharded Data Parallelism (FSDP)

In this blog we will explore Fully Sharded Data Parallelism (FSDP), a technique that enables efficient distributed training of large neural network models. We’ll examine FSDP from a bird’s-eye view and shed light on its underlying mechanisms. Why FSDP? When

Jane Han / CLIKA

From C-LAB to VivaTech and the Samsung Developer Conference: CLIKA’s Journey with Samsung Electronics

About C-LAB and CLIKA: In February 2023, CLIKA was selected for C-LAB Outside, a startup acceleration program operated by Samsung Electronics. With the aim of extending the success of its in-house idea incubation program—C-Lab Inside—to support external startups, the initiative seeks to empower promising entrepreneurs beyond the

Jane Han

Meet the Team: Corbin Foucart, Deep Learning Researcher

CLIKA’s mission is to connect AI software to various hardware around us. CLIKA's AI compression technique, developed in-house, dramatically reduces the size of AI models without sacrificing their performance, enabling businesses to quickly, economically, and reliably commercialize their AI applications on devices used in our daily lives.

Corbin Foucart

Understanding the TFLite Quantization Approach (Part I)

Running machine learning models can be computationally expensive, and these models don’t typically port well to hardware or embedded devices. The TensorFlow Lite (TFL) library is, according to the documentation, “a mobile library for deploying models on mobile, microcontrollers and other edge devices.” TFL allows the conversion of native

Jane Han / Alessandro Mapelli

Meet the Team: Alessandro Mapelli, the Chief Creative Officer

CLIKA’s mission is to connect AI software to various hardware devices around us. CLIKA's AI compression technique, developed in-house, dramatically reduces the size of AI models without sacrificing their performance, enabling businesses to quickly, economically and reliably commercialize their AI applications on devices used in our daily

Intel Ignite

Intel Ignite Announces Startups Selected for Fall 2023 US Cohort

BOSTON, Oct. 2, 2023 – Intel® Ignite, Intel Corporation’s startup accelerator program for early-stage deep tech startups, announced the 10 companies selected for the third U.S. cohort. Through a rigorous process, and from an application pool that included more than 300 companies, 10 startups were selected for the 12-week

Kyle Wiggers

Clika is Building a Platform to Make AI Models Run Faster

Ben Asaf spent several years building dev infrastructure at Mobileye, the autonomous driving startup that Intel acquired in 2017, while working on methods to accelerate AI model training at Hebrew University. An expert in MLOps (machine learning operations) — tools for streamlining the process of taking AI models to production and