Is there any single figure of merit for Linux systems ?
No, thankfully nobody has yet come up with a Lhinuxstone (tm) measurement. And if there were one, it would not make much sense: Linux systems are used for many different tasks, from heavily loaded Web servers to graphics workstations for individual use. No single figure of merit can describe the performance of a Linux system under such different situations.
Then, how about a dozen figures summarizing the performance of diverse Linux systems ?
That would be the ideal situation. I would like to see that come true. Any volunteers for a Linux Benchmarking Project ? With a Web site and an on-line, complete, well-designed reports database ?
... BogoMips ... ?
BogoMips has nothing to do with the performance of your system. Check the BogoMips Mini-HOWTO.
What is the "best" benchmark for Linux ?
It all depends on which performance aspect of a Linux system one wants to measure. There are different benchmarks to measure the network (Ethernet sustained transfer rates), file server (NFS), disk I/O, FPU, integer, graphics, 3D, processor-memory bandwidth, CAD performance, transaction time, SQL performance, Web server performance, real-time performance, CD-ROM performance, Quake performance (!), etc. AFAIK no benchmark suite exists for Linux that supports all these tests.
What is the fastest processor under Linux ?
Fastest at what task ? If one is heavily number-crunching oriented, a very high clock rate Alpha (600 MHz and going) should be faster than anything else, since Alphas have been designed for that kind of performance. If, on the other hand, one wants to put together a very fast news server, it is probable that the choice of a fast hard disk subsystem and lots of RAM will result in higher performance improvements than a change of processor, for the same amount of $.
Let me rephrase the last question, then: is there a processor that is fastest for general purpose applications ?
This is a tricky question, but it has a very simple answer: NO. One can always design a faster system, even for general purpose applications, independent of the processor. Usually, all other things being equal, higher clock rates will result in higher performance systems (and more headaches too). Take an old 100 MHz Pentium out of a (usually not) upgradable motherboard and plug in the 200 MHz version, and one should feel the extra "hummph". Of course, with only 16 MBytes of RAM, the same investment would have been more wisely spent on extra SIMMs...
So clock rates influence the performance of a system ?
For most tasks (except empty NOP loops, which modern optimizing compilers remove anyway), an increase in clock rate will not give you a linear increase in performance. Very small processor-intensive programs that fit entirely in the primary cache inside the processor (the L1 cache, usually 8 or 16 KB) will show a performance increase equivalent to the clock rate increase, but most "true" programs are much larger than that: they have loops that do not fit in the L1 cache, share the L2 (external) cache with other processes, and depend on external components, so they will show much smaller performance increases. This is because the L1 cache runs at the same clock rate as the processor, whereas most L2 caches and all other subsystems (DRAM, for example) run asynchronously at lower clock rates.
OK, then, one last question on that matter: which is the processor with the best price/performance ratio for general purpose Linux use ?
Defining "general purpose Linux use" is not an easy thing ! For any particular application, there is always a processor with THE BEST price/performance ratio at any given time, but it changes rather frequently as manufacturers release new processors, so answering "Processor XYZ running at n MHz" would be a snapshot answer. However, the price of the processor is insignificant when compared to the price of the whole system one will be putting together. So, really, the question should be: how can one maximize the price/performance ratio for a given system ? And the answer to that question depends heavily on the minimum performance requirements and/or maximum cost established for the configuration being considered. Sometimes, off-the-shelf hardware will not meet minimum performance requirements and expensive RISC systems will be the only alternative. For home use, I recommend a balanced, homogeneous system for overall performance (now go figure what I mean by balanced and homogeneous :-); the choice of a processor is an important decision, but no more so than choosing hard disk type and capacity, amount of RAM, video card, etc...
What is a "significant" increase in performance ?
I would say that anything under 1% is not significant (it could be described as "marginal"). We humans will hardly perceive the difference between two systems with a 5% difference in response time. Of course, some hard-core benchmarkers are not human and will tell you that, when comparing systems with performance indexes of 65.9 and 66.5, the latter is "definitely faster".
How do I obtain "significant" increases in performance at the lowest cost ?
Since most source code is available for Linux, careful examination and algorithmic redesign of key subroutines could yield order-of-magnitude increases in performance in some cases. If one is dealing with a commercial project and does not wish to delve deeply into C source code, a Linux consultant should be called in. See the Consultants-HOWTO.