Why does char c = 3; compile, but float f = 1.3; error out?
char c = 3; byte b = 300; short s = 300; int i = 30000; are assignments of integer constants (signed or unsigned). The compiler range-checks the value against the target type: if it fits, the assignment compiles (char c = 3; and short s = 300; are fine); otherwise it errors out (byte b = 300; fails, because a byte only holds -128 to 127).

float f = 1.3; float f = 1.3f; double d = 1.3; are assignments of floating-point constants. The difference between 1.3d and 1.3f is precision, not range. The literal 1.3 defaults to a double-precision number, so assigning it to a float loses precision. It needs a cast (or the f suffix)!

1.3 is also different from 1.30 in physics/science, since the two imply different precision. A distance d = 1.3 miles implies d is between 1.25 and 1.35 miles. A distance d = 1.30 miles implies d is between 1.295 and 1.305 miles, which is a much more precise measurement. Of course, not everybody follows the convention.
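A minimal sketch of both rules in Java (the class name NarrowingDemo is just for illustration). Constant integer assignments that fit the target type compile via implicit narrowing; out-of-range constants and double-to-float assignments need an explicit cast:

```java
// Demonstrates Java's compile-time range check for integer constants
// and the precision rule for floating-point literals.
public class NarrowingDemo {
    public static void main(String[] args) {
        // Integer constants: the compiler range-checks and narrows implicitly.
        char c = 3;          // OK: 3 fits in char (0..65535)
        short s = 300;       // OK: 300 fits in short (-32768..32767)
        int i = 30000;       // OK
        // byte b = 300;     // compile error: 300 is outside byte range (-128..127)
        byte b = (byte) 300; // explicit cast compiles, but the value wraps to 44

        // Floating-point constants: 1.3 is a double literal by default.
        // float f = 1.3;    // compile error: possible lossy conversion double -> float
        float f1 = 1.3f;        // OK: float literal
        float f2 = (float) 1.3; // OK: explicit cast, precision may be lost
        double d = 1.3;         // OK

        System.out.println((int) c); // 3
        System.out.println(b);       // 44 (300 truncated to 8 bits)
        System.out.println(f1 == f2);// true: both round 1.3 to the nearest float
    }
}
```

Note that the cast does not make the assignment safe; it only tells the compiler you accept the truncation (300 becomes 44) or the loss of precision.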