__/ [ M ] on Sunday 14 May 2006 08:54 \__
> Roy Schestowitz wrote:
>
>> __/ [ M ] on Saturday 13 May 2006 20:28 \__
>>
>>> Tim Smith wrote:
>>>
>>>> In article <1147506449.314246.192970@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>, sonu
>>>> wrote:
>>>>> i wrote like
>>>>> main()
>>>>> {
>>>>> guint64 x=0xffffffffffffffff
>>>>>
>>>>> printf("%15d",x);
>>>>> }
>>>>>
>>>>> But it is showing warning integer constant is too large for "long"
>>>>> type
>>>>
>>>> First, next time paste in your exact code. The code you gave above is
>>>> missing a semicolon, and so will give more errors than what you listed.
>>>>
>>>> Second, next time show the exact and complete error message. If you had
>>>> shown:
>>>>
>>>> a.c:3: warning: integer constant is too large for "long" type
>>>>
>>>> instead of just paraphrasing it, then the other three people who have
>>>> already tried to help you would have realized the error is about the
>>>> assignment line (line 3), and not the printf line. You've wasted their
>>>> time and yours because they are focusing on the possible error in the
>>>> printf format you are using, which has nothing to do with the error you
>>>> are actually asking about.
>>>>
>>>> (That's not to say the printf line is without error, so do keep the
>>>> other responses in mind after you get past the error you were asking
>>>> about, as they will be helpful with the error you are going to then run
>>>> into).
>>>>
>>>> Now, on to the problem. You need to tell it that your constant is
>>>> supposed to be 64 bits. How you do this depends on the compiler, I
>>>> believe, as this is not part of standard C. For this particular
>>>> compiler, try sticking a suffix of "LL" on the constant:
>>>>
>>>> guint64 x = 0xffffffffffffffffLL;
>>>>
>>>> I have not been able to directly test this, because my gcc does not
>>>> recognize guint64. However, if I do this:
>>>>
>>>> long long x = 0xffffffffffffffff;
>>>>
>>>> I get the same error you got, and if I do this:
>>>>
>>>> long long x = 0xffffffffffffffffLL;
>>>>
>>>> the error goes away.
>>>>
>>>> (And if guint64 is a typedef that is coming from some header file you
>>>> are including, you should have included the #include in the code you
>>>> posted).
>>>>
>>>
>>> I haven't got any access to 64-bit machines *yet*, so I haven't needed to
>>> do anything like that. However, I will keep that one in my back pocket :-)
>>>
>>> Wonder if they will change the suffix if and when we ever go to 128-bit
>>> machines.
>>
>> ,----[ Quote ]
>> | System/370, made by IBM, is possibly considered the first rudimentary
>> | 128-bit computer as it used 128-bit floating point registers. Most
>> | modern CPUs such as the Pentium and PowerPC have 128-bit vector
>> | registers used to store several smaller numbers, such as 4 32-bit
>> | floating-point numbers. A single instruction can operate on all these
>> | values in parallel (SIMD). They are 128-bit processors in the sense that
>> | they have registers 128 bits wide; they load and store memory in units
>> | of 128 bits, but they do not operate on single numbers that are 128
>> | binary digits in length.
>> `----
>>
>> Source: http://en.wikipedia.org/wiki/128-bit
>>
>> Can't wait till the kilobit processor. Imagine the complexity of the chip
>> and the compiler...
>
> Your saying that reminded me of articles and other things I have seen which
> suggest that there is a limit to how fast you can make a chip go using
> silicon as the raw material.
>
> Here is an interesting discussion.
>
>
> http://episteme.arstechnica.com/groupee/forums/a/tpc/f/77909585/m/7930911925
>
> With Intel coming up with a 'dual core', maybe the future is parallel
> processing rather than clock speed and larger registers.
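
Incidentally, going back to the original 64-bit constant question at the top
of the thread, here is a minimal sketch of how the corrected program might
look. I am assuming guint64 boils down to an unsigned 64-bit integer (the
original #include was never shown), so the portable <inttypes.h> types stand
in for GLib's typedef here:

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    /* The ULL suffix marks the constant as (at least) 64 bits wide,
     * which is what silences the "integer constant is too large" warning. */
    uint64_t x = 0xffffffffffffffffULL;

    /* PRIu64 expands to the right printf conversion for a 64-bit
     * unsigned value, so the format-mismatch problem goes away too. */
    printf("%20" PRIu64 "\n", x);

    return 0;
}

Something along the lines of "gcc -Wall fix.c" should then build it without
the warning (the filename is only an example, of course).
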
The discussion of silicon limits and parallelism tickles a few spots.
Firstly, with a wider data path (in this case the number of bits and bus
'density'), you can lower the clock speed and still achieve the same
throughput. The register complexity, on the other hand, could lead to
melting...
As regards architectures, I was never fond of multi-processor machines.
They are expensive and, while there are advantages to centralisation,
distributing the work is fine as well. Here at the Division we have a
machine with 60 processors, if I recall correctly. However, it requires a
great deal of re-writing of the code. I am not sure it's always worth the
investment, but I see others do it.
Since my undergraduate days, I have been toying with this idea of more
affordable computing, which takes advantage of idle time on standard
computers. Given a piece of code that is tailored for the purpose, you can
get your data processed without waiting in the queues of computational
servers. It has served me extremely well since. With Windows, such ideas
would seem laughable, as the machines are unmanageable. The individual
machines are also unreliable and unstable, whereas stability is a
prerequisite here.
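
To give a flavour of it, here is a toy sketch of the sort of check such a
cycle-harvester might make before borrowing a machine's spare time. The
/proc/loadavg read and the 0.5 threshold are only illustrative assumptions,
not what my actual code does:

/* Purely illustrative: poll the 1-minute load average on a Linux box
 * and decide whether the machine is idle enough to take on a batch job.
 * The 0.5 threshold is an arbitrary number picked for the example. */
#include <stdio.h>

int main(void)
{
    double load1;
    FILE *fp = fopen("/proc/loadavg", "r");

    if (fp == NULL) {
        perror("/proc/loadavg");
        return 1;
    }
    if (fscanf(fp, "%lf", &load1) != 1) {
        fprintf(stderr, "unexpected /proc/loadavg format\n");
        fclose(fp);
        return 1;
    }
    fclose(fp);

    if (load1 < 0.5)
        printf("load %.2f -- machine looks idle, safe to start a job\n", load1);
    else
        printf("load %.2f -- machine is busy, try elsewhere\n", load1);

    return 0;
}
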
Anyway, as I write this post in a stream-of-consciousness sort of way, what
it boils down to is that you can buy many low-end processors far more
cheaply than one Holy Grail. Then, you need to be able to distribute the
workload among the different units. The cost is physical space. Dual-,
triple- or quad-core? No, thank you. 64-bit? Nice, but rarely a necessity.
Until it reaches the point where production lines and competition make it
cheap...
Best wishes,
Roy
--
Roy S. Schestowitz - GNU/Linux: Because a PC is a terrible thing to waste
http://Schestowitz.com | Free as in Free Beer ¦ PGP-Key: 0x74572E8E
9:15am up 16 days 16:12, 11 users, load average: 0.34, 0.62, 0.54
http://iuron.com - semantic engine to gather information