Background:
Nowadays, the compilation process for any language is divided into two parts, and the same applies to Objective-C:
- Frontend Compiler (Clang)
- Backend Compiler (LLVM)
Frontend Compiler (Clang): The responsibility of the frontend compiler is to take source code and convert it into an intermediate representation (IR). In the case of Clang, that IR is LLVM IR.
Backend Compiler (LLVM): The responsibility of the backend compiler is to take the IR as input and convert it into object code. LLVM's input is a bitstream of LLVM IR (bitcode) and its output is a sequence of machine instructions (object code). Each CPU has a different instruction set, so LLVM's output is CPU dependent: it can be executed only on a specific CPU.
There may be some questions in your mind:
1) What is the need to divide into these phases?
2) What is LLVM IR? Can we see the LLVM IR as Output?
What is the need to divide into these phases?
It is beneficial for both programming language designers and hardware manufacturers. If you create a new language, you only need to implement the frontend compiler and can reuse the already existing backend. That way you do not have to take on the overhead of code optimization and object-code generation for different architectures. On the other hand, if you create a new CPU chip, you only need to implement the backend compiler for it.
What is LLVM IR? Can we see the LLVM IR as Output?
It is an intermediate representation and is much more readable than object code. You can easily compile a program into LLVM IR using Clang.
I have written the following Objective-C code in a file:
- (void)LLVMIRTest:(NSUInteger)testCount {
    NSLog(@"%lu", testCount);
}
Run the following command:
clang -S -emit-llvm LLVMIRTest.m
and Clang generates the following IR code:
define internal void @"\01-[LLVMIRTest LLVMIRTest:]"(%0*, i8*, i64) #0 {
  %4 = alloca %0*, align 8
  %5 = alloca i8*, align 8
  %6 = alloca i64, align 8
  store %0* %0, %0** %4, align 8
  store i8* %1, i8** %5, align 8
  store i64 %2, i64* %6, align 8
  %7 = load i64, i64* %6, align 8
  notail call void (i8*, ...) @NSLog(i8* bitcast (%struct.__NSConstantString_tag* @_unnamed_cfstring_ to i8*), i64 %7)
  ret void
}

declare void @NSLog(i8*, ...) #1

The first two parameters are the default parameters (self and _cmd), and i64 is the function parameter I declared (NSUInteger). All the parameter values are stored into local stack slots (the alloca instructions), the testCount value is loaded back, and then the NSLog method is called. As you can see, it is much more readable than object code.
What happens if I enable Bitcode in my application:
Let's suppose you are developing an application for iPhone, targeting all devices starting from the iPhone 4s. Now you have multiple architectures to support (armv7, armv7s, arm64), and these architectures may have different instruction sets.
Now the questions are:
1) Who will handle and generate these instructions for your application for each architecture?
2) How will the application package all the instructions?
Who will handle and generate these instructions for your application for each architecture?
Thanks to LLVM for taking this responsibility. Wait, where is Clang? It is busy generating LLVM IR from your source code, which LLVM then takes as input to generate the object code (machine instructions).
How will the application package all the instructions?
If you have not enabled Bitcode in your application, then the build produces a fat binary, which contains object code for every architecture you support. So if a user downloads your application on an iPhone 5s, he/she also gets the code (instructions) for armv7 and armv7s. This unnecessarily increases the disk space your application uses.
Oh no! Can I do something to avoid this?
Yes. Simply enable Bitcode, and the rest is done by the compiler and the app distribution process.
Enable Bitcode:
If you enable Bitcode in your application, Xcode will upload LLVM IR (bitcode) to iTunes Connect instead of finished machine code. All the optimization and the final build are then prepared by iTunes Connect for each device, so when a user installs your app, he/she installs only the build for that specific architecture.
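In Xcode this is the Enable Bitcode build setting; if you manage settings in an .xcconfig file, it is a single line (ENABLE_BITCODE is the real Xcode setting name):

```
ENABLE_BITCODE = YES
```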
OK, I should enable it in my application because:
1) It reduces the storage space for my application (app thinning).
2) I can take advantage of any better optimizations introduced by LLVM immediately, without uploading a new binary.
Anything I need to worry about:
Everything looks nice, so there seems to be no need to worry, but note that LLVM IR is far more readable than machine code. That can be a security issue: if someone develops a library and distributes it with Bitcode enabled, another person can pick up the library and dig through the bitcode. So enabling Bitcode can be a security concern for distributed libraries.
Hey, I have one more question here: can I know what optimizations are applied to my code when I upload to iTunes Connect?
No, you can't.
That's weird. Let's suppose LLVM applies some optimization and my code breaks after that; what should I do?
Yes, that is a valid point, and you cannot do anything in that case. Basically, you cannot reproduce the actual output of your application on your local machine.
Hmmm! Any other points?
Yes. Inline assembly is not allowed if you enable Bitcode in your application. Sometimes you want to write assembly code directly for performance, but if you enable Bitcode you will be stuck.
Pre-check to enable Bitcode:
1) If you enable Bitcode on your main target, then all child targets must have Bitcode enabled, but the reverse is not true. So in your project any library, framework, or child target (extension) may have Bitcode enabled while the main target has it disabled.
2) For watchOS apps, Bitcode must be enabled.
What Next?
You can create a sample project and try enabling Bitcode in various scenarios, such as enabling it only on a single framework.