Saturday, July 26, 2025

"INTs Aren't Integers and FLOATs aren't Real"

 

                            I was told this is a cat-submarine. Tail = Periscope. I believe it.
 


Over the past few weeks, I’ve been juggling two realities. On one side, work: networks and the daily exploration of security tasks in a banking environment. On the other, free time: low-level code, NASM, one instruction at a time.

 

I’ve been reviewing the basics—x86 syntax, memory layout, data definitions, etc etc etc. I'm following a series of videos from a YouTuber I appreciate, and although I found some information lacking or imprecise—particularly in the episode devoted to division—it's still a good set of videos and I'd advise anyone to watch them here. Lately I've taken a gander at integers: how they're stored, manipulated, compared, and what all those flags mean when you're moving bits around and trying to make sense of a program in GDB.

If you’ve spent more than a few hours in GDB (as I have, unreasonably so at times), you’ve probably done a CMP eax, ebx and then wondered what exactly happened to the flags. What’s the deal with CF, ZF, SF, and the rest of the alphabet soup? Why do certain jump instructions follow CMP, and not others?

So, quick refresher for the two family members that like me and still read this blog:

CMP just subtracts the second operand from the first, sets flags, and discards the result. The flags tell you the outcome, and then you pick the jump based on what you want to test.

Instruction    Meaning                    Flags checked
JE / JZ        Equal (zero result)        ZF = 1
JNE / JNZ      Not equal                  ZF = 0
JL / JNGE      Less than (signed)         SF != OF
JLE / JNG      Less or equal (signed)     ZF = 1 or SF != OF
JB / JC        Below (unsigned)           CF = 1
JA             Above (unsigned)           CF = 0 && ZF = 0

Notice those signed vs unsigned differences? JB isn’t "jump if smaller", it’s "jump if below"—unsigned. If you’re comparing signed ints, you should be using JL, JG, and so on.
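
If you want to poke at this without opening GDB, here's a rough Python sketch of my own (the cmp_flags helper and the values are made up, purely for illustration) that mimics how CMP sets the flags for 32-bit operands and then checks a few of the jump conditions from the table:

    # Hypothetical sketch: emulate what CMP a, b does to the flags for 32-bit
    # operands, then evaluate a few jump conditions from the table above.
    MASK = 0xFFFFFFFF

    def cmp_flags(a, b):
        """Return (ZF, SF, CF, OF) roughly as CMP would set them for 32-bit a - b."""
        result = (a - b) & MASK
        zf = int(result == 0)
        sf = (result >> 31) & 1                  # sign bit of the result
        cf = int((a & MASK) < (b & MASK))        # unsigned borrow
        # Signed overflow: operand signs differ and the result takes b's sign.
        sa, sb, sr = (a >> 31) & 1, (b >> 31) & 1, (result >> 31) & 1
        of = int(sa != sb and sr == sb)
        return zf, sf, cf, of

    # 0xFFFFFFFF is 4294967295 unsigned, but -1 as a signed 32-bit integer.
    zf, sf, cf, of = cmp_flags(0xFFFFFFFF, 1)
    print("JL (signed less):    ", sf != of)             # True: -1 < 1
    print("JB (unsigned below): ", cf == 1)              # False: 4294967295 is not below 1
    print("JA (unsigned above): ", cf == 0 and zf == 0)  # True

Same bit pattern, opposite verdicts depending on whether you read it signed or unsigned. That's the whole JL vs JB split.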

Floats make it even more nuanced. UCOMISS xmm0, xmm1, for example, is how you compare scalar single-precision floats. That instruction sets flags much like CMP does (ZF, PF and CF), but it works with IEEE 754 values, not integers. And yes, it's aware of signs, NaNs, Infs, and the rest (of floating-point hell).
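
A quick way to feel what "unordered" means in practice, this time in plain Python (the same IEEE 754 rules apply underneath):

    import math

    nan = float("nan")
    print(nan == nan)            # False: NaN compares unordered, even to itself
    print(nan < 1.0, nan > 1.0)  # False False: every ordered comparison with NaN fails
    print(math.isnan(nan))       # True: the only reliable way to test for NaN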

Anyway, all that to say: this is kinda subtle, or at least it requires some study and care. IMO, totally worth learning. Most would disagree profoundly. I’ve been pushing myself to remember it, slowly but deliberately. You can check out some of the tiny experiments here. It’s not a project, more like a scratchpad that runs on opcodes and (lovely, quality) coffee.

 

                                                                 ...When in doubt, explain it to me. I'm anathema too.


And you know what's fun? Mathematics! No, really. I've seen it again and again: people treating floats as if they are basically the real numbers (ℝ). They aren't!

Just take a look under the hood and you'll understand why.

"But OPQAM, I use Python/Java/Whatever. I don’t care about Assembly or floating-point registers!"

And that’s fair—until you try 0.1 + 0.2 and get 0.30000000000000004, or even 0.30000001192092895508. Then it might matter.
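
If you want to reproduce both of those numbers yourself, here's a small Python sketch; the f32 helper is just something I made up to round a value through single precision:

    import struct

    def f32(x):
        """Round x to the nearest IEEE 754 single and return it as a Python float."""
        return struct.unpack("<f", struct.pack("<f", x))[0]

    print(0.1 + 0.2)                 # 0.30000000000000004   (double precision)
    print(f32(f32(0.1) + f32(0.2)))  # 0.30000001192092896   (single precision)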

Can you think of a situation where such a small discrepancy could be a problem? I sure can. I can think of several, and some of them involve falling bridges.

Here’s the core issue: ints aren’t the integers, and floats aren’t the reals (they can’t even represent a simple rational like 1/10 exactly).

 

An example

The decimal 0.1 becomes the binary 0.0001100110011001100110011001100110011..., with the 0011 block repeating forever.
But computers can’t store infinity. They cut off after a fixed number of bits: 24 significant bits for single-precision floats (23 of them stored), 53 for doubles (52 stored). It’s like trying to store 1/3 in decimal — you can’t write 0.3333... forever. You round. You approximate. So do computers.

So, this means you cannot represent 0.1 exactly in binary floating point.
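
You don't have to take my word for it; Python will show you exactly what gets stored when you write 0.1 (a quick sketch using only the standard library):

    from decimal import Decimal
    from fractions import Fraction

    # The double nearest to 0.1, written out exactly.
    print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625
    print(Fraction(0.1))  # 3602879701896397/36028797018963968  (a fraction over 2**55)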

 

The problem isn’t the mathematics — it’s representation.

How do systems and programmers deal with this?

  • Use decimal representations (decimal.Decimal) when exactness matters (e.g. money).

  • Use rational types that store fractions exactly (Fraction(1, 10)).

  • Use symbolic math when precision must be preserved throughout (SymPy, CAS software).

  • Or use fixed-point arithmetic — store cents instead of euros.

These are workarounds. The real solution? Know that floats are approximations, not truth.
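
Here's a minimal sketch of the first two workarounds (plus the fixed-point one) using Python's standard library; the price is obviously a made-up example:

    from decimal import Decimal
    from fractions import Fraction

    # Decimal: build from strings, not floats, or you inherit the binary error.
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))      # True

    # Fraction: exact rational arithmetic.
    print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True

    # Fixed point: keep money as integer cents and only format at the edges.
    price_cents = 1099                     # 10.99 EUR, stored exactly
    total_cents = 3 * price_cents          # 3297, no rounding anywhere
    print(f"{total_cents // 100}.{total_cents % 100:02d} EUR")    # 32.97 EUR

Note the strings going into Decimal: Decimal(0.1), with a bare float, faithfully preserves the very error you were trying to avoid.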



                                                Accurate depiction of my two family members' faces as they read through this.


Meanwhile, back in the Real World, there was a CTF going on in the team. A colleague of mine created it and invited us all to some friendly competition over the next few weeks. Honestly? I will probably only do a couple of CTFs before I turn my attention elsewhere. Having a family + hobbies + a day job does limit one's time. Still, I got to dip in, and I managed to solve a challenge involving a simple—but satisfying—privilege escalation. I'm not going to give a lot of details, since that goes against the point of the 'contest' and the site actually asks us not to. But here's the 'trick': a classic PATH hijack. I dropped a fake ls binary into a writable dir (/tmp), positioned it early in $PATH, and executed the vulnerable binary, which, instead of the real ls, ran my stand-in (which just called cat). Bang. Root shell. Dump flag. Walk away smiling.
This was, of course, made possible by the binary being purposefully SETUID (and by it calling ls through $PATH rather than by absolute path). A no-no. But there you have it. It was fun.

And yeah, sure—it’s not the most sophisticated vector ever, but the fact is, simple stuff works. You don't have to be fancy-schmancy to make something give you a 'win'. It is thus in Jiu Jitsu, and in illusionism. And so it is in hacking.

Also worth noting: a few days ago I got into a discussion with a colleague about TPM (Trusted Platform Module). I’ve blogged about the TPM issue here, but here’s the gist of it: I had to disable TPM on an old laptop (a very respectable ThinkPad x260) just to make the thing actually power off correctly. I tried everything—kernel parameters, ACPI tweaks, prayer (not really, but I totally could have!)—but nothing worked. Full shutdown always left the machine 'hot'.

Disabling TPM did the trick.

Why? Long story short: the TPM implementation on that hardware was tailored for Windows, and Linux support is... charitable at best. As for the discussion: that colleague warned me that in certain scenarios, like full power drains, I could end up with an unbootable machine if I lost TPM state. So I did what any sane person would do: I tested it again and again, simulating different scenarios.

 

Unplugged, drained, every scenario short of desoldering the CMOS battery. Result? Nothing broke. LUKS doesn’t need TPM. At least, not for the way I have this set up. On Linux, with this setup, TPM is optional unless you're deliberately tying encryption keys to it—and even then, tread carefully.

These tests were fun. They reminded me of the joy of breaking things on purpose, and the calm that comes with understanding exactly why something behaves the way it does.

So yeah, life’s been busy. I haven’t had much time to write (sentence never written by any amateur blogger ever). But I’m still here, still learning, and still hacking away.

Next up? I'll probably hit you up with some more ASM stuff as I keep watching those videos and experimenting. Maybe some CTFs... who knows? Not me! And I'm right here.

We’ll see.

