I did a presentation today at an ex-employer of mine. It was a forerunner to my upcoming user group presentation on LINQ, and when I introduced type inference I was thrown an interesting question that I couldn’t answer for sure. As a bit of background…
Type inference is a .Net language addition that lets the compiler work out the type of a variable from the value you assign it at declaration. It’s done using the var keyword, which will look pretty familiar to anybody who’s written something in a scripting language before. The difference in .Net is that the inference is done at compile time, so we keep full type safety through the process.
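A quick sketch of what that compile-time safety buys us (my own illustrative example, not from the presentation):

```csharp
using System;

class Program
{
    static void Main()
    {
        var answer = 42;          // inferred as int at compile time
        var message = "hello";    // inferred as string

        // answer = "oops";       // would be a compile error: answer is an int,
                                  // unlike a scripting language's dynamic var

        Console.WriteLine(answer.GetType());  // System.Int32
        Console.WriteLine(message.GetType()); // System.String
    }
}
```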
Given that information – consider what would happen here…
var myInt = 1;
myInt = 2147483648; // int.MaxValue + 1
The question was – does the compiler take into account the fact that we’re setting the value outside the range of an int at some point later down the track and therefore infer the type of myInt to be a 64 bit integer? My answer was initially no, that we’d get an error, but I wasn’t sure. The answer?
No. Inference only considers the initial assignment, so myInt stays an int, and we get a compilation error telling us that the oversized literal can’t be implicitly converted down to a 32 bit integer.
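For completeness, here’s a sketch of the failing case alongside the obvious escape hatch of declaring the variable explicitly (my own example, not from the talk):

```csharp
using System;

class Program
{
    static void Main()
    {
        var myInt = 1;          // inferred as int (32-bit) from the initial value
        // myInt = 2147483648;  // compile error: the literal won't implicitly
                                // convert to the 32-bit int that was inferred

        long myLong = 1;        // declared explicitly as a 64-bit integer instead
        myLong = 2147483648;    // fine: well within long's range
        Console.WriteLine(myLong);
    }
}
```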
So what happens if we want a variable that can store a larger number at a later point? There are two answers here, based on the way you assign the var at declaration.
1. var myInt = 2147483648; //int.MaxValue + 1
2. var myInt = 1U;
The first one is a bit of a magic number, so I don’t really like it. One caveat worth noting: an unsuffixed literal is given the first of int, uint, long and ulong that can represent its value, so both of these actually infer uint rather than a 64 bit type; if you genuinely need 64 bits, the L (or UL) suffix will get you a long (or ulong). To best describe the second approach I’ll quote the MSDN entry on the uint type:
“When you use the suffix U or u, the literal type is determined to be either uint or ulong according to its size”
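That size-dependent behaviour is easy to see for yourself. In this sketch (my own example), the first literal fits in a uint while the second exceeds uint.MaxValue (4,294,967,295), so the U suffix lands them in different types:

```csharp
using System;

class Program
{
    static void Main()
    {
        var small = 1U;           // fits in a uint -> inferred as uint
        var big = 5000000000U;    // exceeds uint.MaxValue -> inferred as ulong

        Console.WriteLine(small.GetType()); // System.UInt32
        Console.WriteLine(big.GetType());   // System.UInt64
    }
}
```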
So there you have it. Type inference – and integers… They say you learn something new every day!