
What does Enable Bitcode do in Xcode?

Background:

Nowadays the compilation process for any language is divided into two parts, and the same applies to Objective-C:
  • Frontend Compiler (Clang)
  • Backend Compiler (LLVM)

Frontend Compiler (Clang): The responsibility of the front-end compiler is to take source code and convert it into an intermediate representation (IR). In the case of Clang, that IR is LLVM IR.

Backend Compiler (LLVM): The responsibility of the back-end compiler is to take the IR as input and convert it into object code. LLVM's input is a bitstream encoding of LLVM IR (bitcode) and its output is a sequence of machine instructions (object code). Each CPU has its own instruction set, so the LLVM output is CPU dependent and can be executed only on that specific CPU.
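To make the split concrete, here is a rough sketch of the two stages on the command line (the file names are placeholders): Clang emits LLVM IR, and the LLVM back end, invoked here via llc from an LLVM toolchain, turns that IR into machine-specific code.

# Front end: Clang lowers the source file to textual LLVM IR.
clang -S -emit-llvm main.m -o main.ll

# Back end: llc lowers the IR to assembly for a specific CPU,
# and the assembler turns that into object code.
llc main.ll -o main.s
clang -c main.s -o main.o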



You may have these questions in mind:
1) What is the need to divide into these phases?
2) What is LLVM IR? Can we see the LLVM IR as Output?

What is the need to divide into these phases?

It is beneficial for both programming language designers and hardware manufacturers. If you create a new language, you only need to implement a front-end compiler; you can reuse the existing back end, so you do not have to take on the overhead of code optimization and object-code generation for different architectures. On the other hand, if you create a new CPU, you only need to implement a back end for it.

What is LLVM IR? Can we see the LLVM IR as Output?
It is an intermediate representation and is more readable than object code. You can easily compile a program into LLVM IR using Clang.
I have written the following Objective-C code in a file named LLVMIRTest.m:

- (void) LLVMIRTest:(NSUInteger)testCount{
    NSLog(@"%lu", testCount);
}
Run the following command:
clang -S -emit-llvm LLVMIRTest.m
and the following IR code is generated by Clang:
define internal void @"\01-[LLVMIRTest LLVMIRTest:]"(%0*, i8*, i64) #0 {
  %4 = alloca %0*, align 8
  %5 = alloca i8*, align 8
  %6 = alloca i64, align 8
  store %0* %0, %0** %4, align 8
  store i8* %1, i8** %5, align 8
  store i64 %2, i64* %6, align 8
  %7 = load i64, i64* %6, align 8
  notail call void (i8*, ...) @NSLog(i8* bitcast (%struct.__NSConstantString_tag* @_unnamed_cfstring_ to i8*), i64 %7)
  ret void
}

declare void @NSLog(i8*, ...) #1

The first two parameters are implicit parameters, and the i64 is the parameter I declared (NSUInteger). All of the parameter values are stored into stack slots (the alloca instructions), the declared argument is loaded back, and then NSLog is called. You can see it is much more readable than object code.
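Why two extra parameters? Conceptually, every Objective-C method is lowered to a C function that receives the receiver (self) and the selector (_cmd) as implicit arguments before the declared ones. A rough sketch (the function name is made up for illustration):

#import <Foundation/Foundation.h>

// Hypothetical C-level equivalent of -[LLVMIRTest LLVMIRTest:].
// The %0* and i8* in the IR correspond to self and _cmd,
// and the i64 corresponds to the NSUInteger argument.
void LLVMIRTest_impl(id self, SEL _cmd, NSUInteger testCount) {
    NSLog(@"%lu", (unsigned long)testCount);
}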

What happens if I enable bitcode in my application?

Let's suppose you are developing an application for iPhone and targeting every device from the iPhone 4s onwards. Now you have multiple architectures to support (armv7, armv7s, arm64), and each architecture has a different instruction set.

Now the questions are:
1) Who will generate the instructions for each architecture your application supports?
2) How will the application package all these instructions?

Who will generate the instructions for each architecture your application supports?

Thanks to LLVM for taking this responsibility. Wait, where is Clang? It is busy generating LLVM IR from your source code, which LLVM then takes as input to generate object code (machine instructions).

How will the application package all these instructions?

If you have not enabled bitcode in your application, the build produces a fat binary which contains object code for every architecture you support. So if a user downloads your application on an iPhone 5s, they also get the code (instructions) for armv7 and armv7s, which unnecessarily increases the disk space used by your application.
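You can see this for yourself with the lipo tool (the paths below are placeholders for your own build products):

# List the architecture slices contained in a binary.
lipo -info MyApp.app/MyApp

# A fat binary is just several single-architecture binaries glued together.
lipo -create MyApp_armv7 MyApp_arm64 -output MyApp_universal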

Oh no! Can I do something to avoid this?
Yes, simply enable bitcode and the rest is done by the compiler and the app distribution process.

Enable Bitcode:

If you enable bitcode in your application, Xcode will upload the LLVM IR (bitcode) to iTunes Connect. After that, all the optimization and the final builds are prepared by iTunes Connect for each device. In this case, when a user installs your app, they install only the slice for their specific architecture.
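In Xcode this is the ENABLE_BITCODE build setting; under the hood, Clang embeds the IR into the object files via the -fembed-bitcode flag. A rough sketch, with MyApp standing in for your own scheme:

# Archive with bitcode enabled for the whole build.
xcodebuild archive -scheme MyApp ENABLE_BITCODE=YES

# Roughly what the build system does per source file.
clang -fembed-bitcode -c LLVMIRTest.m -o LLVMIRTest.o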

OK, I should enable that in my application because:

1) It reduces the storage space for my application (app thinning). 
2) I can take advantage of any better optimizations introduced by LLVM without having to upload a new binary.

Anything I need to worry about?

Everything looks nice, so there is little to worry about, but remember that LLVM IR is far more readable than machine code. If someone develops a library and distributes it with bitcode enabled, another person can pick up the library and dig through the bitcode, so this can be a security concern when enabling bitcode.
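As a rough illustration (assuming an LLVM toolchain that ships llvm-dis is installed), bitcode can be turned straight back into readable IR:

# Emit a bitcode file instead of object code.
clang -c -emit-llvm LLVMIRTest.m -o LLVMIRTest.bc

# Disassemble the bitcode back into textual LLVM IR.
llvm-dis LLVMIRTest.bc -o LLVMIRTest.ll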

Hey, I have one more question: can I know which optimizations are applied to my code when I upload it to iTunes Connect?
No, you can't.

That's weird. Suppose LLVM applies some optimization and my code breaks afterwards; what should I do?
That is a valid point, and you cannot do much in that case. Basically, you cannot reproduce the exact shipped binary of your application on your local machine.

Hmmm! Any other point?
Yes, inline assembly is not allowed if you enable bitcode in your application. Sometimes you want to write assembly directly for performance reasons, but if you enable bitcode you will be stuck.
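For example, a helper like the following (a made-up, arm64-only snippet) relies on hand-written, target-specific assembly, which is exactly the kind of code that conflicts with the bitcode setting described above:

#include <stdint.h>

// Reads the arm64 virtual counter register directly.
// Hand-written, target-specific assembly like this cannot be
// shipped from a target that has bitcode enabled.
static inline uint64_t read_virtual_counter(void) {
    uint64_t value;
    __asm__ volatile("mrs %0, CNTVCT_EL0" : "=r"(value));
    return value;
}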

Pre-checks before enabling bitcode:

1) If you enable bitcode on your main target, then all of its dependencies (libraries, frameworks, extensions) must have bitcode enabled as well, but the reverse is not required: a library, framework, or extension in your project may have bitcode enabled while the main target does not.
2) For watchOS apps, bitcode must be enabled.

What Next?

You can create a sample project and try enabling bitcode in various scenarios, such as enabling it on only a single framework.
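One quick experiment (the path is a placeholder): after building, check whether a binary actually carries an embedded bitcode (__LLVM) segment.

# Bitcode is embedded in a __LLVM segment of the Mach-O file.
otool -l MyFramework.framework/MyFramework | grep __LLVM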

