I made the same app 15 times, here are the results Part 1 — Introduction & Methodology

Ali Taha Dinçer on 2023-12-28

Mastering the Mobile Dev Maze: UIKit vs. SwiftUI vs. XML vs. Compose - Part 1

Compose Multiplatform Header (Gathered From Official CMP Github Page)

TLDR: I implemented the same coin app 15 times using UIKit, SwiftUI, XML, Compose, and Compose Multiplatform, varying settings and comparing FPS, memory usage, and app size. The results show that, as of January 2024, you should either go full CMP (Compose Multiplatform) on your iOS app screen or leave it out entirely. For Android, it basically does not matter.

After spending years tinkering with Flutter development, I found myself pretty excited about Kotlin Multiplatform (KMP) when I first came across it. The idea of using the same code across different platforms and still getting that native feel was a game-changer. I hopped on the KMP train early in 2023 and have been exploring every bit of it since. I’ve developed two apps and played around with some of the big libraries like Apollo GraphQL, Ktor, SQLDelight, Realm, Koin, and KMM-ViewModel. Having a couple of years of Android experience made diving into these libraries a breeze, and knowing my way around Kotlin was a big help. By the time KMP hit its stable release in December 2023, it felt like the perfect moment to get really serious about it.

JetBrains had this pitch for KMP: it’s flexible for developers. You can go all-in and share your entire business logic, or you can take it slow and integrate KMP parts into your existing business logic bit by bit. That’s just my take on what they said in their video. But JetBrains didn’t just stop with KMP; they had bigger plans. They wanted to challenge the likes of Flutter and React Native with something even better, more efficient, and more developer-friendly for the multi-platform scene. They were aiming to share not just the logic but the UI as well. And so, in August 2021, they rolled out Compose Multiplatform (CMP) in alpha, letting you share your UI code on the web, desktop, and Android. But what about iOS and the whole Apple ecosystem? Hold on… as of May 2023, CMP for iOS hit alpha too. That was a huge step in shaking up the multi-platform world.

But here’s the big question: what about the native performance issue? In my view, KMP does a stellar job when the UI stays native, and you’re just sharing business logic. But what’s the deal with the UI part? Can Compose really keep up with SwiftUI or UIKit in performance? I know CMP for iOS is just starting out and has a long way to go to reach stability. But what’s the situation right now? A bunch of Android developers have started using CMP in their apps, and some are even pushing them into production. But at the end of the day, as a user, all I care about is a smooth, hassle-free experience. Imagine having a top-tier phone like an iPhone 15 Pro Max or a Pixel and running into laggy UI. That would be a bummer, right? Most users would blame their phone rather than think the app isn’t optimized. We developers sometimes forget that our users don’t know or care about the tech behind our apps. They just want something stable that performs well and doesn’t kill their batteries. So, can CMP deliver that today?

With that thought, I decided to dig in and benchmark CMP’s performance. I whipped up a simple app, which turned into 15 different versions, to gradually introduce CMP and see how it stacks up performance-wise. This series of articles will cover what I built and tested, including the methodology and criteria I used. I’ll wrap it up with my own takeaways and all the nifty solutions I figured out along the way.

Before we jump into the benchmarking adventure, check out my KMP projects from mid-2023. The second one’s still in the works, but I’ll keep you posted:

GitHub - Subfly/KMMeal (github.com)

GitHub - Subfly/ricKMMorty (github.com)

Finally, as you read through the series, you can follow along with the repo below, where I published all the apps along with the libraries and resources:

GitHub - Subfly/the_compose_experiment: a repository that holds a continuous experimentation with Compose Multiplatform until it reaches stable (github.com)

Pre-thinking Possible Outcomes

Before diving into the nitty-gritty of this article series, let’s take a step back and consider what to expect. Starting a CMP project is straightforward with the Kotlin Multiplatform Wizard. Once you’ve created and downloaded a project, you’ll find a folder named ‘composeApp’. This is where all the Compose magic happens. In the ‘composeApp/src/commonMain/kotlin/’ directory, there’s a file named ‘App.kt’, which is essentially the starting point of our Compose UI. One key observation here is that the CMP imports are from ‘androidx.compose’, identical to what we use in Jetpack Compose for Android apps. This similarity is quite a revelation — it implies you’re using the same Jetpack Compose codebase in your multi-platform application.

With this in mind, it’s logical to expect similar performance, app size, and memory usage for the Android version, since it’s leveraging the same library. However, when it comes to iOS, we need to bear in mind that we should expect the unexpected. CMP is still in its alpha stage for iOS, so predicting outcomes here is tricky. Also, given that CMP for iOS uses SKIA (SKIKO) — much like Flutter did before introducing Impeller — there might be some shared challenges. These could include an increase in app size, as the IPA package includes the SKIA engine. Another critical aspect to consider is the garbage collector used by CMP, which is the Kotlin/Native GC. This means that for parts of the app where KMP and CMP come into play, Apple’s native garbage collection isn’t being used, potentially impacting memory management on iOS devices.

Methodology

Let’s see how I went about this. I’ll walk you through the names I’ve given to the apps, the libraries and architecture that were my go-to’s, and give you a peek into the code behind the FPS (Frames Per Second) and Memory Usage Measurers. I’ll also break down the tests I ran. So, let’s jump right in and see what’s under the hood!

Naming of the Apps

At the outset of this experiment, I initially planned to use both Retrofit and Ktor for Android app networking. However, as the project progressed, I decided to streamline the process by exclusively using Ktor for handling network requests, since that minimizes the number of variables in the testing process.

In preparing the apps for this project, my primary aim was to minimize the number of libraries used. This approach was crucial because each library introduces a new variable, which could lead to differences in testing results from one app to another, affecting aspects like performance, memory usage, and app size.

Here’s the approach I took:

Truth be told, using the built-in performance inspectors in Android Studio or Xcode would have been the easiest way of measuring app performance. However, my ultra genius mega-mind decided to create a separate KMP library for that purpose. My idea was to have a tool that could be easily integrated into any of my apps, perhaps even future ones. Unfortunately, integrating this library with the native SDKs turned out to be a bigger challenge than I anticipated. After three days of wrestling with it, probably due to my own knowledge gaps, I just couldn’t get it to work. So, I resorted to the good old method of copying and pasting the measurer code into every project.

My aim was not just to measure performance, but to do it in a way that avoided the clutter and complexity of standard tools. I wanted clear, straightforward data points that could be easily parsed into JSON and visualized in Python as easy-to-understand charts. In the end, building my own performance measurers gave me exactly what I needed and turned out to be the simplest solution, even though it brought its own set of challenges along the way.
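To illustrate the “straightforward data points” idea, here’s a minimal sketch of logging each sample as a single JSON line that a Python script can later parse. The field names are hypothetical, not the repo’s actual format:

```kotlin
// Sketch: emit each sample as one JSON object per line, so readings can be
// pulled from the console log and parsed line-by-line in Python.
// Field names here are illustrative, not the repo's actual format.
fun logLine(kind: String, second: Int, value: Int): String =
    """{"type":"$kind","t":$second,"value":$value}"""

fun main() {
    val fpsSamples = listOf(58, 60, 47)
    fpsSamples.forEachIndexed { i, fps ->
        // prints e.g. {"type":"fps","t":1,"value":58}
        println(logLine("fps", i + 1, fps))
    }
}
```

One object per line keeps the parsing side trivial: Python’s `json.loads` on each line, no custom log-format parser needed.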

FPS Measurer
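The measurer code itself lives in the linked repo; the core idea can be sketched in plain Kotlin. On a device, a platform hook (Android’s Choreographer, iOS’s CADisplayLink) would feed in frame timestamps; this sketch just simulates them:

```kotlin
// Sketch of an FPS counter in plain Kotlin. A platform frame callback
// (Choreographer / CADisplayLink) would call onFrame() with each frame's
// timestamp in nanoseconds; frames are counted per one-second window.
class FpsCounter {
    private var windowStartNs = -1L
    private var frames = 0
    val samples = mutableListOf<Int>() // one FPS reading per elapsed second

    fun onFrame(frameTimeNs: Long) {
        if (windowStartNs < 0) windowStartNs = frameTimeNs
        if (frameTimeNs - windowStartNs >= 1_000_000_000L) {
            samples.add(frames)        // close the one-second window
            frames = 0
            windowStartNs = frameTimeNs
        }
        frames++
    }
}

fun main() {
    val counter = FpsCounter()
    var t = 0L
    repeat(61) {                       // simulate ~1 s of frames at 60 Hz
        counter.onFrame(t)
        t += 16_666_667L               // ~16.67 ms per frame
    }
    println(counter.samples)           // prints [60]
}
```

The windowing approach is what makes the later charts readable: one number per second rather than a raw stream of frame times.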

As for app size, there’s no measurer file at all, because measuring it is a straightforward process: install the app in release mode and check the storage usage in the device settings.
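The memory usage measurer mentioned above follows the same polling pattern. As a rough JVM-only analogue (a sketch, not the repo’s actual implementation, which would query platform APIs such as `android.os.Debug.getMemoryInfo` on Android or `task_info` on iOS):

```kotlin
// JVM-only sketch of a polling memory measurer: sample used heap memory
// at a fixed interval and collect the readings. The real apps would use
// platform APIs instead; this analogue reads the JVM heap via Runtime.
fun usedHeapMb(): Double {
    val rt = Runtime.getRuntime()
    return (rt.totalMemory() - rt.freeMemory()) / (1024.0 * 1024.0)
}

fun main() {
    // Take a few readings spaced 100 ms apart.
    val readings = List(3) {
        Thread.sleep(100)
        usedHeapMb()
    }
    println("Used heap (MB): $readings")
}
```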

Testing

At the beginning of these experiments, I set out to find a solid testing method that isn’t subject to human error. An automated approach seemed logical, and Maestro was the hottest thing in the X (Twitter) community at the time. But when I started testing Android with Maestro, the results got pretty complex and needed a lot of explaining to make sense to readers. Trying to draw clear conclusions from Maestro’s automated tests turned into a real head-scratcher, as you’ll see in the Experiments and Results section later on.

Then came iOS testing, and things got a bit tricky. It might have been SKIA or the fact that CMP code lacked some modifiers like .semantics() and .testTag(), but I hit some roadblocks. Maestro was unable to find my views when I used CMP in the iOS app. Additionally, in the "BaseiOSUIKit" app, TableView loading seemed to take forever, and the app started to use gigabytes of memory when I tried to connect to Maestro. So, I decided to try something different - the "User Scroll" method.

While the “User Scroll” approach is a bit more prone to human error, it gave me results that were not only easier to understand but also more revealing than Maestro’s. Here’s the drill: I scrolled rapidly 40 times within a minute and recorded the results. Most of the time, this quick-scroll test wrapped up in just 40 seconds, and I patiently waited for another 20 seconds (which is way more than needed) to let the garbage collector do its thing.

Testing X (Twitter) app in Maestro (Gathered from Maestro’s Front Page)

The Next Part

When I began writing this article, my intention was to compile everything on a single page. However, as the content grew, I found myself with over 9,000 words, equivalent to nearly 35 minutes of reading time on Medium. In the interest of simplicity and to enhance readability, I’ve opted to split this article into three parts. The next installment will delve into the experiments, their outcomes, and offer straightforward explanations of the apps. You can explore these details in the next part below:

I made the same app 15 times, here are the results Part 2 — Experiments & Results (medium.com)

I want to extend my gratitude to you for accompanying me on this journey. Feel free to share and utilize any part of this article series and project, as long as proper credits are attributed. I value your feedback and welcome any questions you may have. You can reach out to me at: