Gentoo Forums

Overclocking with Gentoo
thebigslide
l33t

Joined: 23 Dec 2004
Posts: 790
Location: under a car or on top of a keyboard

PostPosted: Fri Feb 25, 2005 7:11 am    Post subject: Overclocking with Gentoo

This is quite a controversial thing to write a doc on, so I'm lining my cable modem with asbestos to prepare for the flames that will ensue...

Don't report bugs if you're overclocking unless you can reproduce them after booting at stock speeds and doing an emerge -e world.

This howto is not on how to overclock your computer. I don't suggest you overclock your computer. This howto is about how it CAN be done in a manner where you'll be less likely to wreck something simply due to the peculiarities of our choice of operating system. No guarantees. This doc assumes a good understanding of how to safely overclock your computer in the first place. If you haven't overclocked your computer before, the author suggests trying this with bootable CDs first, as it is more forgiving (knoppix or your own livecd, or even the gentoo livecd with memtest86).

As far as I am concerned, the biggest drawback (besides not being able to play the latest windows games :roll: ) of a linux system compared to windows is that interactivity is poor. Overclocking your FSB can overcome this. I have been overclocking gentoo systems for a couple of years now, have wrecked lots of installs, and would like to share my experience and pointers so others won't need as many installs as I did to get it right.

1. Start from stock
Flash the latest BIOSes on your system board, video card, burners, SCSI cards, etc., and make sure the CPU is set to factory defaults. Increase different buses independently and SLOWLY. Mount your filesystem read-only the first time you attempt a new setting, or use a livecd, even if it seems safe (some system boards do not have completely functional bus clock locks and might use inaccurate dividers instead: you might accidentally overclock, say, the IDE bus by moving the FSB past a certain threshold). Any decent new overclocking motherboard has a bus clock lock; most OEM boards don't.

****NOTE: If you tweak something and things aren't working normally, DON'T do a nice shutdown. Hit the hard reset button. The reasoning is this: if the IDE interface isn't working properly, or the buffer is full of junk, writing to the disk is a BAD idea. Most overclocked gentoo installs fail because of disk corruption. ****

2. Set your FSB as high as it will stably go without overclocking the IDE bus. You can determine its limit by booting off a gentoo livecd with the memtest86 boot option repeatedly, making minor increases each time.
2.1 There are usually dividers every 33MHz, some newer motherboards allow much more granular adjustments. The IDE bus MUST NOT be overclocked or disk corruption will ensue.
2.2 Interactivity on a gentoo system is normally limited by memory latency, not memory bandwidth, especially on AMD. Picking an FSB of 166 will give you a faster-feeling system than FSB 168, even though the CPU is faster at 168, and it will probably benchmark faster too, simply because memory access time is reduced when you pick a 'round' FSB.
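The divider arithmetic behind point 2.1 can be sketched in a few lines of shell. The divider values here are illustrative only, not from any particular board; check your motherboard manual for the real ones:

```shell
#!/bin/sh
# PCI/IDE clock that results from a given FSB and FSB:PCI divider.
# Output is in kHz so the arithmetic stays in integers.
pci_khz() {
    fsb_mhz=$1
    divider=$2
    echo $(( fsb_mhz * 1000 / divider ))
}

pci_khz 166 5    # 33200 kHz -- 33.2MHz, essentially in spec
pci_khz 150 4    # 37500 kHz -- 37.5MHz, the PCI/IDE bus is badly overclocked
```

The second line is exactly the "overclocking the IDE bus by accident" trap: a board that only has a 4:1 divider at FSB 150 pushes PCI to 37.5MHz, and disk corruption follows.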

3. Bump your voltages once you've found a safe limit and the system is aged for 2-4 weeks. This applies to burned in CPUs only, obviously. Giving an extra .05V here and there will afford a stability safety net if you have the cooling for it. DON'T bump the voltages and then try to squeeze a little more out of the CPU until it's burned in at the new voltage for at least 2 weeks. This applies to the initial burn in also, which should be done at default voltage or less. Only use the voltage adjustment to make up the stability you lost after increasing the clock speed, not for any wild overclocks.
3.1 Burning in a chip works like this: each 'transistor' in your CPU is not to be thought of as one junction, but as a field of junctions which are redundant (at the atomic level). By burning in the CPU, you actually burn out the weaker links, preventing them from leaking at higher than stock clock speeds. This is beneficial, but must be done over a long period of time. Be aware that the weaker links are the ones which operated at lower voltages, so if you've worked up to a high enough voltage, the chip may eventually not POST at default voltage. You're not burning out transistors: when the dopants are sputtered onto the silicon crystal, it isn't perfect, and when you burn in the chip, you burn out the little spatters of dopant that 'missed', leaving the bulk of the junction fully functional.
3.2 Here's an analogy: Think of yourself flicking a lightswitch on and off repeatedly. The rate at which you flick the switch is analogous to the clock speed of a chip. The force with which you flick the switch is analogous to the voltage you're giving the chip. The switches on your CPU that are able to be snapped open with a lot of force will be able to operate at a higher frequency more reliably because they snap open and shut a lot quicker. Thinking back to 3.1, burning in a CPU basically destroys the switches that move too slowly. You don't need the slow ones as they are superfluous; there are always faster ones in the same junction (up to a point). Once enough switches are destroyed, the CPU will cease to function, but this normally won't happen unless the temperatures go too high.
3.3 The relationship between processor longevity and operating temperature is very non-linear. The relationship between processor longevity and operating voltage at a fixed temperature is quite linear. If you lower the temps sufficiently, you will see a long life from your processor. If one is cooling a processor with water or refrigerant, the voltage probably won't need to go up, as the little switches are less restricted at lower temperatures.

4. Keep it cool. If anything under your hood is running over 45*C, I wouldn't overclock it further. That's just me. Get a giant heatsink or watercooling if you want a stable system. CPUs physically operate more quickly at lower temperatures. Manufacturers have designed their CPUs so that they can operate at pretty liberal operating temperatures at their rated speeds. If one drops the temp to a more reasonable level, the 'little switches' are less restricted, so they are able to switch faster.

5. Don't be too pushy with RAM timings. Anything except CAS and RCD doesn't really make a big difference under gentoo, and if you go too far, you risk instant death of any mounted rw filesystem without much warning, due to linux's extensive disk caching. If you want to tweak these anyway, try them out in memtest86 on a livecd for a good 24 hours before you try them in UT or DOOM :) On some older or OEM systems, adjusting the RAM timings involves a hex editor and flashing your BIOS, and is definitely not recommended.

6. Make sure it will still chug hard before you build any packages. Before using an untested but heavily overclocked system, I ALWAYS at least inflate a stage3 tarball to somewhere on the disk and do an emerge -e world inside it to make sure it is stable. Often I will boot off a livecd and do this on a spare HD 'just in case'. Memtest86 works OK also, but it isn't as thorough as the spare-HD method, since the typical load your system experiences during operation is nothing like running memtest86.
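A hedged sketch of that spare-disk stage3 test. The tarball path and mount point below are made-up placeholders, and the script deliberately does nothing on a non-Gentoo system:

```shell
#!/bin/sh
# Unpack a stage3 onto a scratch filesystem and rebuild world inside it.
# Any segfault or internal compiler error during the emerge means the
# overclock is not stable enough to trust with a real install.
stage3_stress_test() {
    target=$1       # scratch mount point, e.g. /mnt/spare/testroot (placeholder)
    tarball=$2      # e.g. /mnt/spare/stage3-x86-latest.tar.bz2 (placeholder)
    mkdir -p "$target" || return 1
    tar xjpf "$tarball" -C "$target" || return 1
    mount -t proc proc "$target/proc" || return 1
    chroot "$target" emerge -e world
}

# Only attempt the test where the tools actually exist.
if command -v emerge >/dev/null 2>&1; then
    stage3_stress_test /mnt/spare/testroot /mnt/spare/stage3-x86-latest.tar.bz2
    result=$?
else
    result=skipped
fi
echo "stress test: $result"
```

The point is that hours of gcc is a far more realistic load than memtest86, and a failure costs you only a scratch directory, not your install.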

7. If you're getting segfaults or ICEs (internal compiler errors) during compiling, back off the multiplier by 1 (don't use .5 multipliers with linux, as the memory performance hit overwhelms any CPU performance boost that .5x gave you) and try again. If that doesn't work, put the multiplier back up and try giving the RAM and chipset another voltage bump. Failing this, back off the FSB.

8. If you're getting segfaults running other programs and just dropping the clock doesn't fix it, you might have to rebuild your toolchain after attempting the fixes listed in 7, and then rebuild the offending package.


Last edited by thebigslide on Fri Mar 11, 2005 10:32 am; edited 3 times in total
hardcore
l33t

Joined: 01 Nov 2003
Posts: 626
Location: MSU, MI

PostPosted: Fri Feb 25, 2005 8:47 am

I've been overclocking for years now. It used to be a "black art" but now is somewhat commonplace. And when things become popular with the general masses, certain inconsistencies and false truths come about.

Overclocking the FSB is actually quite easy now; with most motherboards these days, raising the FSB is no more harmful than raising the multiplier. Most boards have PCI/AGP locks, so you don't have to use memory dividers anymore.

1.) Clearly start from stock, if your system is new, test out all your hardware to make sure it works beforehand. My suggestion is to use a livecd, this way, you can't possibly bork your system.
2.) I agree with setting your FSB high, but only on Intel systems. AMD systems benefit more from low-latency RAM timings as well as memory that is synchronous with the FSB. If your board has PCI/AGP locks, make sure they're set at 33 and 66MHz respectively.
2.2.) Interactivity is actually mostly affected by your kernel scheduler, I recommend a kernel with the staircase scheduler.
3.) Once you have found your max overclock, (I usually use multipliers to give a rough estimate, then up the FSB so that I get around the max overclock), reduce your settings 1-5%. Do NOT up the voltage more than needed, just reduce the speed by 1-5%. Increased voltage leads to premature CPU death.
3.1) 'Burning in' your CPU does NOT do anything; there is no proof that supports 'CPU burn-ins'. What does happen is that your CPU thermal paste takes a few on/off thermal cycles to fully set. Once set, it performs better, usually by decreasing temps and increasing possible overclocks. I recommend running Prime95 to detect whether your CPU is stable, as it is very sensitive to numerical errors.
3.2) Again, burn-ins do nothing but allow the CPU paste (especially the Arctic Silvers and the like) to settle.

4.) Keeping things cool is generally a good idea; however, CPUs have a high thermal threshold, generally ~90 degrees C. As long as your rig performs stably at these temps, you're alright. But be advised: if your CPU has a high temp, your case usually has a high temp as well, and other components besides the CPU are sensitive to heat (hard drive, etc).

5.) Use the memtest86 boot disk to determine the lowest memory timings you can achieve (especially for AMD chips). Intel chips don't really get much of a boost from this.

6.) Again, test with memtest86 using all the tests available, plus 1-2 instances of prime95 running for at least a week from a LiveCD to assure you're stable, so two weeks of total testing time. This will ensure that when you do build your gentoo system OR go back to your gentoo system, everything will be peachy.

Ensuring everything is stable during testing has ensured that I've never had any hardware related problems with my rigs. I encourage you to test thoroughly as well.
_________________
Nothing can stop me now, cuz I just don't care.
thebigslide
l33t

Joined: 23 Dec 2004
Posts: 790
Location: under a car or on top of a keyboard

PostPosted: Fri Feb 25, 2005 9:27 am

I'd like to reiterate that I've been overclocking chips for years also, just only recently with gentoo. I have successfully booted a 486 at over 200MHz. My current system runs at over twice its default speed and has never crashed on me. I haven't bought a single piece of hardware in the past 2 years, besides disk drives and sound cards, that I HAVEN'T overclocked. This includes replacing components on motherboards, video cards, and SCSI cards with a soldering iron and a steady hand.

Quote:
Do NOT up the voltage more than needed, just reduce the speed by 1-5%. Increased voltage leads to premature CPU death.
Only with inadequate cooling or a chip that isn't burned in. Multipliers factor in even on boards with working AGP/PCI locks. For there to be communication between the buses with the least latency possible, the buses must be running at frequencies that are common multiples of some number. The larger that number, the lower the transaction latency. If your FSB and RAM are running at 141MHz and the PCI bus is at 33MHz, reads from the PCI bus will be out of sync with the FSB and there will be quite a bit of latency on the PCI bus, for example. If the system bus is running at 140MHz, the FSB transactions will always occur just ahead of the PCI transactions, decreasing latency. This becomes a big factor in the overall speed of your gigabit network card/PCI graphics card/SCSI card, etc.
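The 'common multiple' argument can be put in rough numbers (my own numerical gloss on the point, not a measurement): edges of two clocks coincide at the GCD of their frequencies, so the bigger the GCD, the more often the buses line up.

```shell
#!/bin/sh
# GCD of two integer clock frequencies (in MHz) via Euclid's algorithm.
# The two buses' edges coincide gcd(f1, f2) million times per second.
gcd() {
    a=$1; b=$2
    while [ "$b" -ne 0 ]; do
        t=$(( a % b )); a=$b; b=$t
    done
    echo "$a"
}

gcd 132 33    # 33 -- every PCI edge lines up with an FSB edge
gcd 141 33    # 3  -- edges line up only once per 11 PCI cycles
```

An FSB that is an exact multiple of 33 (like 132) keeps every PCI transaction in lockstep; an odd figure like 141 leaves most PCI cycles waiting on an out-of-phase FSB edge.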

Quote:
3.1) 'Burning in' your CPU does NOT do anything, there is no proof that supports 'cpu burn ins'. What does happen is your CPU thermal paste takes a few on/off thermal cycles to fully set.
BS. My explanation came straight from a computer engineer. I have a chip that won't POST at stock voltage because it's been gradually burned in at a much higher voltage. In fact, this is very common with older Celerons and Thoroughbred-B Athlon XPs.

Quote:
4.) Keeping things cool is generally a good idea, however CPU's have a high thermal threshold, generally ~90 degrees C. As long as your rig performs stably at these temps, you're alright. But be advised, if your CPU has a high temp, your case usually has a high temp as well, and other components besides the CPU are sensitive to heat (hard drive, etc).
90 degrees? Sure it will run, but not overclocked, and not gentoo. That's why I wrote this howto; gentoo is a little more finicky. At even 70*C, if the system is being pushed, half the data coming from the CPU will be errors and things will b0rk.

as for 5, I don't recommend pushing the RAM timings too far, as this can lead to spontaneous disk corruption. The reasoning here is that if your CPU starts to falter, the system will simply not work; if the RAM starts to falter, you will have massive disk corruption. If you are dead set on cranking down your RAM timings, let me offer a command that will do the same thing: dd if=/dev/random of=/dev/hda (Don't actually do this). BTDT. I realized after retarring my system the 8th or 9th time that it just wasn't worth it for the 2-3% that tightening the RAM timings gives you (besides CAS latency, which is a good 5% from 3.0 to 2.0).
hardcore
l33t

Joined: 01 Nov 2003
Posts: 626
Location: MSU, MI

PostPosted: Fri Feb 25, 2005 12:02 pm

Well, first off: straight from an Intel engineer. The engineer you talked to was probably referring to the "burn-in ovens" mentioned in the article below. Burn-ins are only the result of CPU thermal paste settling, nothing more, nothing less.

Quote:
There is no factual basis for any method that could cause a CPU to speed up after being run at an elevated voltage for an extended period of time. There may be some effect that people are seeing at the system level, but I'm not aware of what it could be. I do know, however, that several years ago when I was motivated I asked for and looked at the burn-in reports for frequency degradation for approximately 25,000 200MHz Pentium CPUs, and approximately 18,000 Pentium II (Deschutes) CPUs, and that, with practically no exceptions at all, they all got slower after coming out of burn-in by a substantial percentage.

To me there is no doubt in my mind that suggesting that users overvoltage their CPUs to "burn them in" is a bad thing. I'd liken it to an electrical form of homeopathy - except that ingesting water when you are sick is not going to harm you, and overvoltaging a CPU for prolonged periods of time definitely does harm the chip. People can do what they want with the machines that they have bought - as long as they are aware that what they are doing is not helping and is probably harming their systems. I have seen people - even people who know computers well - saying that they have seen their systems run faster after "burning it in", but whatever effect they may or may not be seeing, it's not caused by the CPU running faster.


Patrick Mahoney
Senior Design Engineer
Enterprise Processor Division
Intel Corp.


http://forums.extremeoverclocking.com/archive/index.php/t-35376.html

Second, like I said, use a LiveCD. There is no possibility of disk corruption unless you write to disk, or your hard drive catches fire. And like I said about the CPU: your CPU can survive ~90 C, it won't run stably, but I've had CPUs that run at 55-60 C @ load that are rock solid. Each CPU varies, just for everyone to keep that in mind. Also, you can't take the MB temps at face value; unless you have an infrared temperature reader, you won't have anything close to an accurate temperature. So you can safely ignore most temps, again as long as everything remains stable.

Third, you may have overclocked a 486 to 200 MHz, but that don't mean jack; if it's 100% stable, that's a different story. I've had my 2500+ (1833MHz) @ 2900MHz POSTing on air cooling, but it sure as hell doesn't mean it's stable.

Fourth, RAM timings do make a difference on AMD systems, especially the Athlon64 line. With the integrated memory controller on die, you can think of system memory as a HUGE L3 cache, and the lower the latency, the faster that 'L3 cache' will run. As long as you test everything to be 99.99999% stable, the speed doesn't matter, as long as it is stable.
thebigslide
l33t

Joined: 23 Dec 2004
Posts: 790
Location: under a car or on top of a keyboard

PostPosted: Fri Feb 25, 2005 8:04 pm

hardcore, I'm not trying to shut you down or anything here, but while some of the methods you describe may have worked well for you, they are not something I'd recommend people try. This howto isn't meant for people who already know how to successfully overclock their systems. I'd like to help people who haven't had success in overclocking with Gentoo, and what you're describing isn't a conservative approach, especially in messing with RAM timings. As I said in my first response on that topic, if the RAM starts to falter (as it certainly can), not having pushed the timings right to the edge gives you an extra buffer of safety. Tighter RAM timings might give you slightly faster gaming and encoding benchies, but that's about it. It's not worth the risk IMHO, for an extra 5fps in doom3 and 20 seconds off encoding a movie. If a system can't be left unattended and trusted to stay up, I won't recommend that setup to anyone.

Secondly, initial burn-in is done at LOW voltage, with the voltage slowly increased along with clock speed. This is what I've stated above: don't overvolt anything until it's had time to burn in. The Intel engineer you've quoted is clearly talking about people who pump up the voltage right out of the box, which surely is a dumb thing to do. Also, note the line at the bottom where it says Patrick Mahoney was NOT speaking on behalf of Intel Corp. Interestingly, isn't this the sort of propaganda you'd expect from Intel anyway? The engineer I am talking to is a friend, and he was not talking about any burn-in ovens. He also has chips that will fail to POST at default voltages. If you were working for Intel and you saw that as a result of people burning in chips, what would YOU tell the press?

Some of the RAM reviews on anandtech, for example, show some of the fastest memory benchmarks I've seen using CAS3 and higher on AMD64. The only timing that makes a big difference on AMD64 is command rate, which should be 1T. These same results are shown by numerous other hardware review sites.

No, the 486 I overclocked was dead stable running at -60 degrees or so, using a chilled Peltier for cooling. (-40*C winters are awesome for some things.) The RAM, however, was not, and started smoking shortly after power-on as I had a voltage leak somewhere. I went through several sticks of RAM to get it to load a kernel successfully. Never did get a picture of /proc/cpuinfo. I was using turbolinux on that box.
pwhitt
Tux's lil' helper

Joined: 25 Nov 2003
Posts: 85

PostPosted: Sat Feb 26, 2005 2:48 pm

Quote:
burning in a CPU basically destroys the switches that move too slowly. You don't need the slow ones as they are superfluous; there are always faster ones in the same junction (up to a point). Once enough switches are destroyed, the CPU will cease to function, but this normally won't happen unless the temperatures go too high.


my remarks are specific to the original post's points 3.1 and 3.2
if i had said something like that, my old profs would rise from their graves, beat down my door, and kill me in my sleep.

"burning in" a chip in this way does absolutely nothing but potential harm. thebigslide, i think what your friend is talking about is related to junction capacitance and the lower rise time needed for transistors to "switch" quickly. for a transistor to switch, you need a dV/dt at the gate, running from "off" to "on." if you're flipping too quickly and the chip isn't made to be that fast, the apparent voltage at the gate will be reduced by capacitance in the junctions leading to it. the result is missing clocks on various transistors and faulty logic, as things are no longer synchronised. what the gate sees is a voltage that runs from "off' to "kinda more than off" and it just sits there. if however you crank up the voltage, the resulting gate voltages will look like they are going from "off" to "on" again (the dt remains the same, but now dV increases).
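pwhitt's dV/dt point can be put in rough RC terms (this is my own gloss with made-up numbers, not his): a gate node charges like V(t) = Vdd(1 - e^(-t/RC)), so the time to reach a fixed threshold Vth is RC * ln(Vdd/(Vdd - Vth)), and raising Vdd shrinks it.

```shell
#!/bin/sh
# Time (in units of RC) for a gate node to charge to an assumed 0.7V
# threshold: t = RC * ln(Vdd / (Vdd - Vth)).  awk does the floating point.
rise_time() {
    awk -v vdd="$1" -v vth=0.7 'BEGIN { printf "%.3f\n", log(vdd / (vdd - vth)) }'
}

rise_time 1.6    # 0.575 RC at a stock-ish core voltage
rise_time 1.75   # 0.511 RC after a voltage bump: threshold reached sooner
```

That shorter rise time is exactly the margin a small voltage bump buys back at a higher clock; the numbers are illustrative, not from any datasheet.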

when there are redundant transistors in a gate, they are there for a reason. applying too great a potential on the gates of some transistors may very well remove them from the set, but that will in no way help the other transistors that now have to pick up the slack by carrying more current and contend with the increased voltage as well. that is why once a chip is damaged, it gets hotter faster. a crude way to look at it is this: when they are damaged, resistance increases, and heat from passing current increases proportional to i^2*R. more heat = more damage, more damage = more heat... ad infinitum. when you start to damage a chip in this way, it doesn't matter how well you remove the heat - there will be no way to keep up and the chip will literally burn.

i feel compelled to say this for the kids at home: if you were to, as you say, "destroy the switches that move too slowly", you'd break the chip. the chip is made to be the way it is; you never want to intentionally damage anything, ever. armies of engineers spend a lot of time on each part of something that complicated. they do not intend for monkeys at home to start hammering on it to "fix it." if what you are saying were true, then we could make chips the size of our leg, badly designed with many redundant parts, then burn 'em right into 9GHz monsters. that is not how it works at all.
Merlin-TC
l33t

Joined: 16 May 2003
Posts: 603
Location: Germany

PostPosted: Mon Feb 28, 2005 9:05 am

I also have to agree that so-called burn-ins are just hype.
Technically speaking it is pretty stupid to do that. Maybe you can compare it with smoking ;)
What is for sure is that you shorten the lifetime of your CPU considerably.

Also, changing the timings of your RAM is no more or less dangerous than overclocking your CPU.
If you clock your CPU too high, you can also get random crashes and corruption.

That's why using a live CD is a very good idea: no data will be destroyed and you can make sure it works.
hardcore
l33t

Joined: 01 Nov 2003
Posts: 626
Location: MSU, MI

PostPosted: Mon Feb 28, 2005 10:20 am

Merlin-TC wrote:
I also have to agree that so-called burn-ins are just hype.
Technically speaking it is pretty stupid to do that. Maybe you can compare it with smoking ;)
What is for sure is that you shorten the lifetime of your CPU considerably.

Also, changing the timings of your RAM is no more or less dangerous than overclocking your CPU.
If you clock your CPU too high, you can also get random crashes and corruption.

That's why using a live CD is a very good idea: no data will be destroyed and you can make sure it works.


Precisely my point: you can use a livecd to get your max stable overclocks (CPU, FSB, RAM Hz, RAM timings, etc) under 100% or more load, then back them all off by 1-5%, and you are assured a stable system. Well, until July rolls around, then you gotta break out the AC ;)
penquissciguy
n00b

Joined: 28 Feb 2004
Posts: 19

PostPosted: Wed Mar 02, 2005 2:26 pm

IMO, the best way to overclock is to use processors that are intentionally undervolted by the manufacturer to run at lower power levels, like mobile Bartons and LV Xeons. That way, "increasing voltage" only brings the processor back up to stock voltage or slightly higher. My dually has Xeons that technically are running at a 75% overclock at "stock" P4 voltages, and they will run at full tilt all day long.

Ken
Shienarier
Apprentice

Joined: 16 Jun 2003
Posts: 278

PostPosted: Sat Mar 05, 2005 3:57 pm

I have an AMD Athlon XP 1700+ CPU and I was thinking about starting to overclock it.
Then what am I supposed to do?

1) Increase the FSB, boot into memtest and run that for a while, then raise the FSB some more until I crash,
then lower the FSB to the last number that worked. If so, how much should I raise the FSB at a time?
2) Then raise the CPU multiplier 0.5 at a time in a similar fashion to the FSB?
Gentree
Watchman

Joined: 01 Jul 2003
Posts: 5350
Location: France, Old Europe

PostPosted: Sat Mar 05, 2005 9:59 pm

Well, if this guide is intended as a basic guide for beginner overclockers, let's not forget to explain the basics.

Don't do anything until you have working temperature sensors: enable ACPI in the kernel and emerge acpi. Search the forum for details on getting it working.

Either monitor with repeated acpi -t commands or with gkrellm. I suggest the first, since all of this is best done in the more controlled environment of the command-line console.
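For the console option, a trivial loop does the job. This sketch assumes the old /proc/acpi interface of 2.6-era kernels and the acpi tool from portage; on anything else it just reports that it can't:

```shell
#!/bin/sh
# Poll CPU temperature every few seconds while you load-test.
# Bounded at 12 samples (a minute) so it can't run away; raise the
# bound for a longer watch.
if [ -d /proc/acpi ] && command -v acpi >/dev/null 2>&1; then
    i=0
    while [ $i -lt 12 ]; do
        acpi -t        # e.g. "Thermal 1: ok, 47.0 degrees C"
        sleep 5
        i=$(( i + 1 ))
    done
    status=done
else
    status="no ACPI thermal support found"
fi
echo "$status"
```

Run it on a second console while you stress the box, and you'll catch a temperature climb before the BIOS shut-off has to save you.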

Also, if you have a PC Health type section in the BIOS, set up a CPU shut-off temp and a warning temp about 5C lower. These will bail you out if you are not in full control.

As said above, be wary of what some sensors put out. It may just be a thermistor "somewhere near" the cpu's underbelly.



Don't forget that in this area each system and each individual processor is individual; that's what you are trying to benefit from. Don't think "xxx posted that his cpu was fine at 3.2GHz and I have the same": you don't.


Since it is usually CPU temperature that stops the fun, the biggest o/c gain you will ever get will probably come from a good, solid-copper heatsink, so consider the moderate investment. I bought my mobo with a huge Aerocool aluminium heatsink and fan that made more noise than a 737 taxiing for take-off. I replaced it with a CoolerMaster copper sink with a quieter fan and lost about 8 degrees.

A similar improvement later came from adding an 80mm NoiseBreaker S2 on an adapter in place of the 60mm CPU fans. This knocked off another 6 degrees at the same time as making the machine almost silent.



Getting back to the software. Memtest86 is a must (the more recent memtest86+ has _lower_ version numbers since it is a different project.)

As soon as you get out of the BIOS, boot to a CD or floppy with memtest86+ and give your memory a thorough thrashing. This can be just 5 mins or so when trying new values, but once you are settling on some choices, at least 30 mins. This again is not rigorous. You will need to give it a several-hour soak test later.


Well, there's more to system stability than RAM. Next I recommend the cpuburn suite (in portage and on several rescue/boot CDs, so this can be done from CD at first for safety, although make sure you have temp monitoring available too).

This suite contains several very small progs that will push your system harder than it will ever be pushed in reality. Harder than things like prime95 as well.

There are several progs in the suite which push the cpu, the mmx subsystem and the mobo IO circuitry. This should show up possible weaknesses in other areas than just cpu/ram. On this system it was always burnBX that tripped out first. Read the doc.
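A soak run along those lines might look like this sketch. burnK7 is the Athlon variant (burnP6 is for Intel cores), and as I understand the cpuburn docs it exits with an error the moment it computes a wrong result, which is exactly the failure you want to catch. The log path is an arbitrary choice, and the script simply reports if cpuburn isn't installed:

```shell
#!/bin/sh
# Run burnK7 under a temperature log for a short soak test.
if command -v burnK7 >/dev/null 2>&1; then
    burnK7 &                         # load generator in the background
    pid=$!
    i=0
    while [ $i -lt 60 ]; do          # ~5 minutes; lengthen for a real soak
        acpi -t >> /tmp/burn-temps.log 2>&1
        sleep 5
        i=$(( i + 1 ))
    done
    kill "$pid"
    status=done
else
    status="cpuburn not installed (emerge cpuburn)"
fi
echo "$status"
```

Watching the log while the burner runs tells you both whether the maths stays right and where the temperature plateaus under worst-case load.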

One more tip before you start fiddling , make sure you know what to do when you go too far and the Bios won't boot the system any more. This is what is meant by it failing to POST.

Some systems will detect multiple failures to start from a power-off situation and reset the BIOS to safe bootable values; some will need you to reset the BIOS. RTFM.

In any case, a pencil and paper are an invaluable toolkit, even in the new millennium! Jot down your key BIOS settings before it happens and keep a log of all your tests as you go.


After that I would take the approach laid out by hardcore above, testing each step with the tools I suggested.


With the caveat that each system is different in this game , FWIW here's what I did to this system.

CPU: Athlon-xp 1800+ with 128k L1 256k L2
Mobo: ABIT kx7-333

FSB 176 (abs max 182)
divider 5:2:1
multiplier 12.5

cpu @2230MHz (cf 1667 stock)

idle temp 47C (with reduced CPU fan speed)
burnK7 temp 60C (=alarm temp)

Vcore 1.625 (cf 1.60)

The RAM would not take any tweaking.
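For reference, the nominal core clock is just FSB x multiplier. A quick sketch (multiplier passed in tenths to stay in integer shell arithmetic; reported clocks often sit slightly above nominal because the real FSB runs a touch over the BIOS figure):

```shell
#!/bin/sh
# Nominal core clock in MHz from FSB (MHz) and multiplier (in tenths).
core_mhz() {
    echo $(( $1 * $2 / 10 ))
}

core_mhz 176 125   # 2200 -- nominal for FSB 176 x 12.5
core_mhz 133 125   # 1662 -- roughly the stock figure (true FSB is 133.3)
```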


HTH


8)
_________________
Linux, because I'd rather own a free OS than steal one that's not worth paying for.
Gentoo because I'm a masochist
AthlonXP-M on A7N8X. Portage ~x86
Gentree
Watchman

Joined: 01 Jul 2003
Posts: 5350
Location: France, Old Europe

PostPosted: Fri Mar 11, 2005 8:24 am

A further note on the more mechanical side of increasing your o/c.

I have just knocked a healthy 8 degrees C off my cpu temp under full burnK7 workload.

I polished my heatsink !

Well, don't polish the fins; what I did was lap the underside to remove the machining marks. I thought it might just help a little.... maybe. I was gob-smacked.

Get a solid piece of optical glass. A decent-quality mirror is usually very flat.

Lay a piece of fine-grade (wet-or-dry) emery paper on the glass and lap the base of the heatsink to remove all marks. If it is a bit too shiny afterwards, give it a quick circular rub on a fresh bit of paper to de-polish it. Shine is not good for heat dissipation.

Even good-quality heatsinks are not finished to this level; the difference it can sometimes make is surprising.

8)
thebigslide
l33t

Joined: 23 Dec 2004
Posts: 790
Location: under a car or on top of a keyboard

PostPosted: Fri Mar 11, 2005 10:25 am

A mirror is what I use, too :) It makes it easy to see if there are surface imperfections. If you go to a hobby shop, you can get grit for rock tumblers up to about 8000 grit. If you mix some in a little oil, you can make the heatsink even and reflective enough to shave in. Watch the edges, tho, as they will cutcha.
Gentree
Watchman


Joined: 01 Jul 2003
Posts: 5350
Location: France, Old Europe

PostPosted: Fri Mar 11, 2005 5:22 pm    Post subject: Reply with quote

Grit on glass is not so good because it eats the glass as well and pretty soon it's no good as a reference surface.

I lightly oil the back of the paper to make it stay flat; this minimises the rounding of the edges due to paper lift. Since the chip is well away from the edges, this is not a real problem.

BTW, the same technique works nicely on m/c head and barrel surfaces 8)
_________________
Linux, because I'd rather own a free OS than steal one that's not worth paying for.
Gentoo because I'm a masochist
AthlonXP-M on A7N8X. Portage ~x86
thebigslide
l33t


Joined: 23 Dec 2004
Posts: 790
Location: under a car or on top of a keyboard

PostPosted: Sat Mar 12, 2005 5:34 am    Post subject: Reply with quote

You're right. I forgot that I'd used a jig to position everything. Still, when you get into the x000 grits, you're not removing much material.
Hara
Apprentice


Joined: 25 Jan 2004
Posts: 162

PostPosted: Tue May 24, 2005 7:51 pm    Post subject: Reply with quote

I feel like adding my two cents' worth.

There are usually two types of overclockers: the one who tries to save money by overclocking a chip that is rated lower than it can actually handle, and the one who tries to maximize performance completely to have the fastest computer possible. Your goals and criteria largely determine your overclocking capabilities and requirements.

Unless you are going for top of the line (although research is important here too), you need to understand that most of the overclocking work is actually done BEFORE you buy your components. When someone sets up a system, it's important to know not only how fast it goes, but how fast it can go and at what cost to get there. For instance, if you ever tried to overclock a Palomino AMD XP 1800, you'd find it very difficult to get any faster than what you have. But if you tried to overclock a Barton XP 1800, you'd find yourself able to overclock to maybe 2.4 GHz with no sweat. The overclocking ability of your particular hardware is determined more by the manufacturers than by the person at home building the machine. Just because you have a really expensive sub-zero phase-change cooler does not mean you'll be able to overclock well. You'd be much better off purchasing better hardware with the same money.
_________________
Hara

(Mandrake->Slackware->LFS(Never Finished)->Gentoo)
ScOut3R
Tux's lil' helper


Joined: 29 Apr 2005
Posts: 116
Location: Australia

PostPosted: Tue Sep 20, 2005 9:23 pm    Post subject: Reply with quote

Hey!

I like to overclock my PC too, but I can't do it with Gentoo. Before Gentoo I had Slackware, and I could run my CPU at 2500+@3600+ without the slightest problem. Under Gentoo I have serious problems. Okay, I can't compile while overclocked; I can accept that. But the system only boots when I use a minimal overclock (2500+@3200+). I can use it fairly well this way, but if the CPU load is at a constant 100% the system sometimes crashes (like when switching between X and console). If I go higher I can't even boot. It's kind of disturbing for me, 'cause I like to use BOINC.
I use minimal CFLAGS (-march -fomit-frame-pointer -pipe) to compile my system. I'm seriously thinking about going back to Slackware.
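For comparison, a deliberately conservative /etc/make.conf along the lines described above might look like this (a sketch only; the -march value assumes an Athlon XP, so adjust it for your CPU):

```shell
# /etc/make.conf (sketch) - minimal, conservative compiler flags.
# Leaving out aggressive options like -O3 or -funroll-loops gives a
# marginal overclock fewer chances to miscompile something subtle.
CFLAGS="-march=athlon-xp -O2 -fomit-frame-pointer -pipe"
CXXFLAGS="${CFLAGS}"
MAKEOPTS="-j2"
```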
Gentree
Watchman


Joined: 01 Jul 2003
Posts: 5350
Location: France, Old Europe

PostPosted: Thu Sep 22, 2005 9:28 am    Post subject: Reply with quote

So go back to Slackware or read the copious detail in this thread and start setting up your overclocking methodically and in a tested manner. It might just work on Gentoo as well.

8)
_________________
Linux, because I'd rather own a free OS than steal one that's not worth paying for.
Gentoo because I'm a masochist
AthlonXP-M on A7N8X. Portage ~x86
ScOut3R
Tux's lil' helper


Joined: 29 Apr 2005
Posts: 116
Location: Australia

PostPosted: Thu Sep 22, 2005 8:36 pm    Post subject: Reply with quote

Gentree wrote:
So go back to Slackware or read the copious detail in this thread and start setting up your overclocking methodically and in a tested manner. It might just work on Gentoo as well.

8)


I used the same settings as with Slackware or any other OS. I'm thinking that it might be the highly optimized installation? I mean, Slackware consists of i486 packages while my Gentoo system uses all the instruction sets. Could this be the problem? If so, then I'm going to stay with Gentoo, 'cause it's much better than any other OS I have ever seen. 8)
Sorry for the harsh manner I used in my previous message; it was too late and I was too harassed.
erikm
l33t


Joined: 08 Feb 2005
Posts: 634

PostPosted: Thu Sep 22, 2005 9:02 pm    Post subject: Reply with quote

Since this seems to be quite the watering hole for OC aficionados, I'd like to ask a question: I moderately OC some of my chips, mainly to recover the slight 'margin' that is built in by default; that is, I'd put myself in the former category defined by Hara.

I recently had a rather heated discussion with someone I thought to be an authority on the subject, who claimed that even the slightest overclock would completely destroy a source based OS like Gentoo, since OC'ing would make the CPU miss instructions every now and then, and thus not compile binaries correctly.

My stance was, that as long as you run a stable (as in benchmarks / memtest86 for 48 hours, error free) system, your chip can take the OC without producing faulty code.

What do you think? Is a moderate overclock ok in the long run, or is he right?
oggialli
Guru


Joined: 27 May 2004
Posts: 389
Location: Finland, near Tampere

PostPosted: Fri Sep 23, 2005 12:35 pm    Post subject: Reply with quote

I just can't help the urge to inform people that the first post is almost 100% BS and should not be taken as advice. Ask anyone a bit more familiar with OC'ing and its effects and he'll point out tons of false information in that text.
_________________
IBM Thinkpad T42P - Gentoo Linux
Hara
Apprentice


Joined: 25 Jan 2004
Posts: 162

PostPosted: Sat Sep 24, 2005 5:48 am    Post subject: Reply with quote

Quote:

I recently had a rather heated discussion with someone I thought to be an authority on the subject, who claimed that even the slightest overclock would completely destroy a source based OS like Gentoo, since OC'ing would make the CPU miss instructions every now and then, and thus not compile binaries correctly.

My stance was, that as long as you run a stable (as in benchmarks / memtest86 for 48 hours, error free) system, your chip can take the OC without producing faulty code.

What do you think? Is a moderate overclock ok in the long run, or is he right?


All computers have the risk of developing an error. It's an inherent property of silicon-based electronics. There are probabilities that have to do with the chemistry and electron flow of the medium (think alignment of the planets) that are simply unavoidable. Servers usually have an MTBF, or mean time between failures, rating that's typically measured in months to years.

So even running a benchmark on a normal computer for several years would probably lead to an eventual computational error. The question is whether overclocking significantly decreases the MTBF. Overclocking, however, does not affect the chemistry of the chips, but rather decreases the time the processor has between clock cycles. These types of errors have a different cause. Instability due to overclocking is caused by the processor not having enough time to do the work it's supposed to. (Heat has the added effect of slowing down electrical signals, which limits how far a processor can be pushed.) The types of errors I was talking about earlier would only increase because more processing work is done over time, and would do so at such an insignificant level that it would take more time to test than the life span of the processor.

Being a source-based distribution, we are ALL more prone to less stability because we do more processing work to create our programs (rather than just running a decompression algorithm). So all of us have to deal with the possibility of a failed compile. Usually all that is required is a recompile. In reality, a stable and conservative overclock has no noticeable effect on compiling.

Basically, a mild overclock should be stable for your needs. If you really needed reliability, you'd have multiple computers computing the same exact thing and cross-referencing data to ensure error-free results. That type of robustness is usually reserved for life-critical devices like NASA would use.
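One cheap way to check whether a box computes deterministically under load is to run the same CPU-heavy job several times and compare checksums. A rough sketch using only gzip and coreutils (input size and pass count are arbitrary):

```shell
#!/bin/sh
# Compress the same fixed input repeatedly; on a stable machine every
# pass must yield an identical checksum, so a single mismatch means
# the CPU or RAM corrupted data somewhere along the way.
set -e
work=$(mktemp)
head -c 8000000 /dev/urandom > "$work"    # fixed input for all passes

ref=$(gzip -9 -n -c "$work" | md5sum | cut -d' ' -f1)
for pass in 1 2 3 4 5; do
    sum=$(gzip -9 -n -c "$work" | md5sum | cut -d' ' -f1)
    if [ "$sum" != "$ref" ]; then
        echo "pass $pass: checksum mismatch - NOT stable" >&2
        exit 1
    fi
done
echo "all passes matched"
rm -f "$work"
```

In practice a marginal machine tends to trip a check like this fairly quickly under load, though it is no substitute for a proper memtest86 run.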
_________________
Hara

(Mandrake->Slackware->LFS(Never Finished)->Gentoo)
saFFyre
n00b


Joined: 22 May 2004
Posts: 22

PostPosted: Wed Oct 05, 2005 12:09 am    Post subject: Reply with quote

Also remember that many devices we buy, such as CPUs and GPUs, are identical to much faster models. When they are produced they are speed-binned, which means that if X amount of a batch fail to achieve the desired results, they will be knocked down a grade (or more). Many of these chips, however, will function fine at the higher speed. I run an AMD Athlon 64 3000+ Venice chip happily at 2.2 GHz (1.8 stock) with stock volts; that is 3800+ speed. I would never consider a system stable without rigorous stress testing, usually 48 hours of Prime stress testing and about 20 passes of memtest86. If your system can pass rigorous tests like this, I do not see any reason why a Gentoo system would have problems. With a modern motherboard, a sensible approach and adequate cooling, overclocking is really very safe and a very good way of saving yourself some money.
thebigslide
l33t


Joined: 23 Dec 2004
Posts: 790
Location: under a car or on top of a keyboard

PostPosted: Fri Oct 14, 2005 9:59 am    Post subject: Reply with quote

oggialli wrote:
I just can't help the urge to inform people that the first post is almost 100% BS and should not be taken as advice. Ask anyone a bit more familiar with OC'ing and its effects and he'll point out tons of false information in that text.


specifically?...
oggialli
Guru


Joined: 27 May 2004
Posts: 389
Location: Finland, near Tampere

PostPosted: Fri Oct 14, 2005 1:49 pm    Post subject: Reply with quote

Well, let's see. Hardcore mentioned some of this already, I see.

Interactivity worse than Windows - that depends greatly on your system setup, especially the CPU scheduler, but it's not the point. It's not only the FSB that matters here but the overall clock rate - the FSB has no special effect on interactivity.

Motherboards either have working AGP/PCI locks or they don't. I haven't heard of a single model that has locks up to a certain FSB and past that uses a divided FSB - and there probably isn't anything that stupid, since the separate clock and synchronization circuits are already there, so why not use them?

2.0) What does memtest have to do with IDE? Nothing. It actually shows you how far your memory and memory controller can go, but the FSB can usually be raised further if so desired (as on the Athlon 64).

2.1) The IDE bus (actually PCI, where the clock comes from) can usually be safely overclocked quite a lot (from 33 to ~40 MHz or beyond) depending on the HD model (at least a few MHz will never cause any problems). Of course there is no benefit, but yes, with older motherboards that don't have locks it's inevitable.

2.2) How do latencies go down by driving the memory slower? Not by themselves, but they can if you adjust your memory timings a little tighter at the same time (which is usually possible when turning the clocks down). But even then this shouldn't be linked to interactivity in any way; interactivity problems are on a much larger scale than the latencies of single memory accesses. And "round" FSBs don't help in themselves. Actually, if you can keep the memory timings tight, running your memory/FSB faster will always help both latency and throughput. One thing, though: if you use an nForce2-based Socket A motherboard, always aim to keep your memory and FSB clocks in sync - an NF2 is, for example, faster running both FSB and memory at 166 than running 166/230 or the like, even though the memory bandwidth broadens considerably. On VIA and especially P4 platforms it isn't as much of a problem, and Athlon 64s with their on-die controllers are of course completely free of FSB/memory syncing issues.

3) Burn-in... I wonder how long this urban myth will live. No one has ever actually observed anything but DEGRADING of OC potential after a "burn-in" of components. Also, you don't need anything like an "introductory period" after choosing a new voltage; there is no problem pumping it where you want it straight away.
3.3) Voltage should always be bumped up after improving cooling if any benefit is wanted. Think of cooling efficiency as a factor that limits your voltage - you can apply more voltage if the chip runs at a lower temp.

4) Yes, cool is fine, but especially with P4 systems one really can't keep the real temperature of the die under load anywhere near 45°C (on air). Don't worry until you reach ~70°C or even more.

5) RAM timings are a lot more important than RAM clock rate, but again, this isn't anything specific to Gentoo, interactivity or even Linux in general. And command rate (1T vs 2T) makes a VERY big difference (something like 20% of pure memory clock).

6) Something like pifast/SuperPi/Prime is a much better measure of stability than compiling (and a lot quicker to show up instabilities too).

7) Where did you get the idea that a .5 multiplier step would hurt your memory performance? There's no reason it would (and it won't). Otherwise it's good advice to try to keep the FSB up, but if you really need to back the multiplier off by 1 (the CPU being that far beyond its limits at its current vcore), leaving the multiplier as is while backing the FSB off by a few percent and raising the core voltage would likely achieve better results.
_________________
IBM Thinkpad T42P - Gentoo Linux
Page 1 of 2

 