Sunday, 19 May 2013

Why would a compiled and linked high-level language execute slower than Assembly/machine code?

Say I compile and link C into a flat binary or some executable output format to run on a bare machine, optimize it, and feed the resulting binary directly to the CPU at boot. Why would that binary take more clock cycles just because it originated as compiled and linked C rather than assembled Assembly? What I mean is: if the same instructions are fed and fetched from some binary format, and the resulting binary contains the same opcodes regardless of whether it came from C, D, Assembly, or even directly written opcodes (if that were possible), why do programmers often say that Assembly will always be faster?
Sorry if that isn't clear, but in general: shouldn't the same fetched opcodes take the same clock cycles and CPU resources regardless of their origin, whether linked, compiled, or assembled? If the binary file contains only the necessary instructions (and a linker script or output-format handler can arrange this for C as well), it should be just as fast.
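
To make that concrete, here is a minimal sketch (assuming GCC or Clang targeting x86-64 at -O2, shown in Intel syntax; the exact output varies by compiler and version) of a trivial C function and the kind of machine code it might compile to. A hand-written Assembly routine that assembles to the same two instructions produces the same bytes, and the CPU fetches and executes those bytes identically either way:

    /* add.c -- a trivial C function */
    int add(int a, int b) {
        return a + b;
    }

    ; Roughly what an optimizing x86-64 compiler might emit (System V ABI:
    ; a arrives in edi, b in esi, the result is returned in eax) -- and
    ; what a programmer would plausibly write by hand as well:
    add:
        lea  eax, [rdi + rsi]   ; eax = a + b
        ret                     ; return to caller

Once the emitted opcodes are identical, their origin makes no difference to how many cycles they take, which is exactly the premise of the question.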
