Gentoo Forums
Overclocking with Gentoo
Gentoo Forums Forum Index -> Documentation, Tips & Tricks
oggialli
Guru
Joined: 27 May 2004
Posts: 389
Location: Finland, near Tampere

Posted: Fri Oct 14, 2005 1:51 pm

And also, raising FSB usually is the only way to overclock newer AMD systems (multipliers upwards locked).
_________________
IBM Thinkpad T42P - Gentoo Linux
thebigslide
l33t
Joined: 23 Dec 2004
Posts: 790
Location: under a car or on top of a keyboard

Posted: Fri Oct 14, 2005 5:45 pm

Sorry for ripping into you here, but I'm kinda sick of being told I'm full of BS. I have a lot of experience here and am really just trying to help people who have been having problems overclocking - not experienced overclockers who would rather find out by themselves. It also irks me when someone says what you've written is false in a derogatory manner because they've read it wrong.

oggialli wrote:
Interactivity worse than Windows - depends greatly on your system setup, especially CPU sched, but, not the point. It's not only FSB which matters here but the overall clockrate - FSB has no special effect on interactivity.
For the vast majority of systems, Windows (XP) shows better interactivity. This is largely due to how much RAM is used for preloading parts of the operating system before they are needed, making them available instantly instead of having to be loaded off disk.

oggialli wrote:
Motherboards have working AGP/PCI locks or they don't, I haven't heard of a single model that would have locks up to a certain FSB and past that would use divided FSB - and, there probably isn't anything that stupid, since the separate clock & synchronization circuits are already there, why not use them?
This statement you're commenting on was geared towards people having problems because of a divider throwing some other bus out of sync. Most people don't realize when this happens.

oggialli wrote:
2.0) What does memtest have to do with IDE ? Nothing. It actually shows you how far your memory and memory controller can go, but FSB can usually be raised further if so desired (like on Athlon64).
a) I was referring to finding the limit of the FSB (which you'd almost always want synchronous with RAM). b) Athlon 64 doesn't have a FSB. It has a memory bus and a HyperTransport bus. Raising the "FSB" control in the BIOS while leaving the memory speed alone will (usually) only overclock the HyperTransport bus and will rarely result in any tangible performance gain unless you're I/O limited to the video card or maybe a network or Fibre Channel controller.

oggialli wrote:
2.1) IDE bus (actually PCI, where the clock comes from) can usually be safely overclocked quite a lot (from 33 to ~40 or beyond) depending on the HD model (at least a few MHz will never cause any problems). Of course there is no benefit, but yes, with older motherboards that don't have locks it's inevitable.
There is no benefit, and all it can cause is problems. Big problems if this bus is on the verge of stability and gives out while performing a disk write.
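To put rough numbers on that (illustrative values of mine, not from any particular board): the PCI clock is derived from the FSB through a fixed divider, so on a board without a working lock, raising the FSB drags the PCI/IDE clock out of spec with it:

```shell
#!/bin/sh
# Illustrative sketch only: how a fixed PCI divider scales with FSB on a
# board without a working PCI lock. The divider value is hypothetical.
divider=4   # chosen so FSB 133 -> PCI ~33 MHz
for fsb in 133 166 200; do
    awk -v f="$fsb" -v d="$divider" \
        'BEGIN { printf "FSB %d MHz / div %d -> PCI %.1f MHz\n", f, d, f/d }'
done
```

With a lock, the PCI clock is generated separately and stays at 33 MHz regardless of FSB; without one, anything much past ~40 MHz on the PCI bus invites the disk-write corruption described above.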

Quote:
2.2) How do latencies go down by driving the memory slower? Not by itself, but actually they can, if you adjust your memory timings a little tighter at the same time (which is usually possible by turning the clocks down). But even then this shouldn't be linked to interactivity in any way; interactivity problems are of a much larger scale than single memory accesses' latencies. And "round" FSBs don't help in themselves. Actually, if you can keep the memory timings tight, running your memory/FSB faster will always help both latencies and throughput. There's one thing though: if you use an nForce2-based Socket A motherboard, always aim to keep your memory and FSB clock in sync - NF2 is, for example, faster running both FSB and memory at 166 than running 166/230 or alike, although the memory bandwidth broadens considerably. On VIA and especially P4 platforms it isn't that much of a problem, and Athlon 64s with their on-die controllers are of course completely free of FSB/mem syncing issues.
Think about the signaling. If one bus is clocked at 33MHz and another at 68MHz and data is going from one to the other, the signal has to wait an extra clock than if the second bus were clocked at 66MHz. This is oversimplified, but makes the point. This is WHY on an NF2 (or ANY platform that supports asynchronous memory clocking), you get better performance with the memory bus in sync, even if slower. It isn't much of a problem on Intel platforms for the same reason that increasing memory timings doesn't hurt performance too badly on an Intel platform.
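A back-of-the-envelope sketch of that point (my numbers, and just as oversimplified as the text: it only models clock-edge alignment, not real chipset buffering). With a 2:1 ratio every slow-bus edge lines up with a fast-bus edge; at 68 MHz the transfer has to wait for the next fast edge:

```shell
#!/bin/sh
# Average extra wait (in ns) for a signal leaving a 33 MHz bus to catch
# the next clock edge on a faster bus. Oversimplified edge-alignment model.
avg_wait() {
    awk -v fm="$1" 'BEGIN {
        slow = 1000 / 33            # slow-bus period, ns
        fast = 1000 / fm            # fast-bus period, ns
        total = 0; n = 10
        for (i = 1; i <= n; i++) {
            t = i * slow                      # slow-bus edge time
            wait = fast - (t % fast)          # delay to next fast edge
            if (wait >= fast - 1e-9) wait = 0 # edges coincide
            total += wait
        }
        printf "%.2f", total / n
    }'
}
echo "66 MHz fast bus: avg extra wait $(avg_wait 66) ns"
echo "68 MHz fast bus: avg extra wait $(avg_wait 68) ns"
```

The synchronous case averages out to zero extra wait, while the 68 MHz case averages several nanoseconds per crossing - the hand-wavy reason running buses in sync can beat running one of them slightly faster.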

oggialli wrote:
3) Burn-in... I wonder how long this urban myth will live. No one has ever actually perceived anything but DEGRADING of OC potential after a "burn-in" of components. Also, you don't need anything like an "introductory period" after choosing a new voltage; there is no problem pumping them where you want them straight away.
I was giving a safer method, but do what you want. I wouldn't prescribe this to anyone, though.
oggialli wrote:
3.3) Voltage should always be bumped up after enhancing cooling if any benefit is wanted. Think of cooling efficiency as a factor that limits your voltage - you can administer more voltage if the chip runs at a lower temp.
True, although you don't need it. As temps go down, current flow also goes up. That's why people doing liquid nitrogen cooling usually stick to stock voltages, even for massive overclocks.

oggialli wrote:
4) Yes, cool is fine, but esp. with P4 systems one really can't keep the real temperature of the die when under load anywhere near 45°C (with air). Don't worry until you reach ~70°C or even more.
That's why I don't buy Intel systems ;-)

oggialli wrote:
5) RAM timings are a lot more important than RAM clock rate, but again, this isn't anything specific to Gentoo, interactivity or even Linux in general. And command rate (1T vs 2T) makes a VERY big difference (like some 20% of pure memory clock).
That statement's validity really depends on what platform you're talking about. 1T vs 2T is a big performance modifier, but only applies to Athlon 64. I have found that because of disk caching, if your RAM cacks out, your filesystem is usually farked. Totally and completely farked. Also, if you run some decent Linux memory benchmarks, you'll find that RAM latency doesn't really do much except for the two variables I've mentioned. Try hdparm with -T. You can also time the application of some filter to a large image with the GIMP for another decent non-synthetic benchmark.
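As a sketch of those two quick checks (the device name is a placeholder, hdparm needs root, and the dd line is my own rough stand-in rather than anything from the post):

```shell
#!/bin/sh
# hdparm -T times cached reads, which exercises RAM/CPU rather than the
# disk itself; guarded so the script still runs where hdparm is missing.
command -v hdparm >/dev/null 2>&1 && hdparm -T /dev/sda
# A crude, portable stand-in for a memory-throughput probe: push 256 MB
# of zeros through the kernel and time it.
time dd if=/dev/zero of=/dev/null bs=1M count=256
```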

oggialli wrote:
6) Something like pifast/superpi/prime is a lot better measure of stability than compiling (and a lot quicker to notice instabilities too).
Using WINE? Also, I will tell you this. I have had systems spontaneously reboot when doing an emerge -e world that would run a synthetic processor benchmark for hours.

oggialli wrote:
7) From where have you made up that .5 would hurt your memory performance ? There's no reason (and it won't). Otherwise, it's a good advice to try to keep the FSB up, but if you really need to back the multiplier by 1 (the CPU is that much beyond its limits at its current vcore), leaving the multiplier as is while backing FSB by a few percent and raising core voltage would likely achieve better results.
It's actually pretty well known that half dividers hurt your performance. There's a good explanation (with benchmarks) on AnandTech, and I won't repeat it; you can RTFM. Also, this is a TEST to see if the processor overclock is hurting your performance. I'm not saying to leave it there. If your system is dying on you, it really helps to find out what is causing that. Increasing your vcore won't necessarily prove anything. Since we don't all have scopes in our basements and signal analysers to connect to our mainboards, knocking back a clock is usually the easiest way to eliminate a variable from the overclocking equation when something's wrong.

Now, all I've provided here is some VERY conservative information for people who might have or have had issues and given up on overclocking Linux boxes. Linux systems seem overall more sensitive to overclocking than Windows ones. That's all. Windows systems can usually run just fine on a 'less than stable' overclock until you put the system under major load. Linux boxes tend to cack right away. It can be frustrating. But what's more frustrating is when people say you're full of it just for trying to help others.
thebigslide
l33t
Joined: 23 Dec 2004
Posts: 790
Location: under a car or on top of a keyboard

Posted: Fri Oct 14, 2005 5:46 pm

oggialli wrote:
And also, raising FSB usually is the only way to overclock newer AMD systems (multipliers upwards locked).
Which platform is that? Athlon XP is usually unlocked upwards (or can be easily enough) and Athlon 64 doesn't have a FSB.
oggialli
Guru
Joined: 27 May 2004
Posts: 389
Location: Finland, near Tampere

Posted: Sun Oct 16, 2005 11:17 pm

HTT can be referred to as FSB for these purposes, since it affects CPU clock rate in just the same way. And yes, I was referring to A64's. Newer AXP's, btw, are usually locked both ways.

I didn't say XP wouldn't have better interactivity - or that it would - and I'm not commenting on that this time either; it is not the point. That doesn't change the fact that drawing a tight line from FSB to interactivity is nonsense. And what crap are you trying to cover that up with? "This is largely due to how much RAM is used for preloading parts of the operating system before they are needed, making them available instantly instead of having to be loaded off of disk." What the fuck is this?

a) It doesn't have anything to do with the matter under discussion
b) It is nonsense (not exactly a surprise)
1) Once you have started an app (not counting swapping), every part of the binary and associated libraries is fully in RAM - no loading off the disk anywhere here.
2) Interactivity problems come down to bad CPU time / IO bandwidth distribution scheduling, nothing else.

Are you sure you aren't referring with "interactivity" to "program startup times"? That would make at least one of your statements somewhat true, but not related to OC'ing at all.

Dividers throwing "some other bus" out of sync? That doesn't explain where your magical "locks working up to a certain FSB" barrier came from - either the locks are there or they aren't (dividers all the way).

"a) I was referring to finding the limit of the FSB (which you'd almost always want synchronous with RAM."

Where did the IDE come from then? Putting that aside...

Still not universally true. Maybe on Athlon XP systems, but on P4's you should occasionally use dividers to get the CPU clock higher. After all, the only platform with a serious slowdown from async FSB/mem is AXP on NF1/NF2.

"It has a memory bus and a hypertransport bus."

Correct. And so?

"the hypertransport bus and will rarely result in any tangible performance"

Sure it does, since it is usually the only way to overclock the CPU as a whole.

"unless you're I/O limited to the video card or maybe a network or fiberchannel controller.."

You never will be - HTT is WAY faster, even at defaults, than any of these.

"There is no benefit, and all it can cause is problems. Big problems if this bus is on the verge of stability and gives out while performing a disk write."

Yes, but this doesn't change the fact that you can't avoid it on boards without (working) AGP/PCI locks. Nor does it change the fact that it isn't that strict - even the worst models can handle the 33->~40 bump easily if you need it elsewhere.

"Think about the signaling. If one bus is clocked at 33MHz and another at 68MHz and data is going from one to the other, the signal has to wait an extra clock than if the second bus were clocked at 66MHz. This is oversimplified, but makes the point. This is WHY on an NF2 (or ANY platform that supports asynchronous memory clocking), you get better performance with the memory bus in sync, even if slower. It isn't much of a problem on Intel platforms for the same reason that increasing memory timings doesn't hurt performance too bad on an Intel platform."

You didn't say anything about driving buses async in the first place. Of course that will cause a slowdown - but if you meant that, and we were supposed to find that out by some magical means, it doesn't get any closer to being "the key to interactivity" (which it has nothing to do with). Also, where does the "especially on AMD" hail from?

1) AXP/Duron are seriously memory bandwidth limited in all cases
2) A64, while not being memory bandwidth limited (correct), doesn't have the async problem, which falsifies your statement about that.

Burn-in...

Safer method? Like I said, "burn-in" will do no good but HARM, if anything (and wastes time, of course). Strange view of "safe".

"people doing liquid nitrogen cooling usually stick to stock voltages, even for massive overclocks."

Hah? They definitely don't - instead they bump the voltages to hell and beyond (because that's the way, and you DO need it). What's this foo again?

"That statement's validity really depends on what platform you're talking about. 1T vs 2T is a big performance modifier, but only applies to Athlon64."

Not true - it applies to AXP too (and can be adjusted there too with modbioses on e.g. the 8RDA, NF7-S and IIRC the A7N8X).

"I have found that because of disk cacheing, if your RAM cacks out, your filesystem is usually farked. Totally and completely farked."

"Try hdparm with -T."

Synthetic streaming benchmarks aren't much of an argument in real-world cases.

"Also, if you run some decent LINUX memory benchmarks, you'll find that RAM latency doesn't really do much except the two variables I've mentioned."

Namely? Definitely not hdparm. And memory access is such low-level business that what's good for you doesn't depend on whether you run Windows 95 or Solaris - the story is the same, and if some "platform-specific" benchmarks give differing results, it's caused by the test in question and can't be generalized to the whole OS. The OS doesn't even handle stuff this low-level; it's the memory controller (in the CPU or NB), the MMU (in the CPU) and the prefetch/prediction logic that affect which timings matter (and because of this it's OS-independent but varies from one platform to another).

"You can also time the application of some filter to a large image with the gimp for another decent non-synthetic benchmark."

Better, and when I do it I see differences with every timing option. How'd you explain that?

"It's actually pretty well known that half dividers hurt your performance."

You mean the case of memory divider roundings...? That's A64 specific, which you failed to mention (and it's not even bad in every case, if it gives you the fastest CPU+mem speed combination, which isn't always achievable with standard dividers). On AXP/Intel, .5 multipliers (and .25's on Intel too) do no harm in any case and give good fine-tuning.

"Using WINE? Also, I will tell you this. "

Of course not - the native Linux superpi, and if at all possible pifast/prime on Windows. About your reboots, I'm sorry they happened, but that still doesn't make gcc a good stress test (prime definitely stresses your CPU (every bit of it) and memory better). Your case sounds more like a random occurrence or insufficient PSU capacity (with the HD bringing it to its limits).

" But what's more frustrating is when people say you're full of it just for trying to help others."

I didn't say that because you were "helping others", but because you failed to do that by supplying false/incomplete information.
_________________
IBM Thinkpad T42P - Gentoo Linux
jmlxg
n00b
Joined: 14 Jul 2005
Posts: 61
Location: CT, USA

Posted: Thu Nov 03, 2005 5:51 pm    Post subject: Hi

Mwhahahahahahahahahahahahahahah!!!!!!!!!!!!! :lol:

I'm back oggialli. Bwhahahahahahah :D

Though I must say you're right on the fact that newer AXP's are locked. 8O

I was wondering what you guys would think of overclocking an Athlon XP-M 1800 rated at 25W / 1.25V Vcore, because I was thinking of getting one of those. :D

I have an MSI K7D Master dual Athlon MP mobo with a 1.82 BIOS, which I am going to update in the near future. :lol:

Yes, I will tell you that Linux is somewhat picky when it comes to overclocking, but when you do overclock in Linux it is sure to work, unlike in Windows, which my brothers want me to use. :D

Thanks,

jmlxg

P.S. > I like laughing evilly just for the sake of it, for you people wanting to ask. :D
_________________
I can not stand Microsoft Windows
oggialli
Guru
Joined: 27 May 2004
Posts: 389
Location: Finland, near Tampere

Posted: Sun Nov 06, 2005 8:45 pm

Be my guest.
_________________
IBM Thinkpad T42P - Gentoo Linux
mdeininger
Veteran
Joined: 15 Jun 2005
Posts: 1740
Location: Emerald Isles, observing Dublin's docklands

Posted: Thu Nov 10, 2005 2:35 pm

*g* Fun post to read, really... just one thing: why would I want to o/c in the first place? I mean, if I want something stable and really rely on it, then usually I should be able to get enough funds to buy faster hardware instead of buying slower components that I have to overclock. I don't see the point in games either, not with CPU/RAM at least. I'm still using an "old" Athlon XP 1900+ with 1GB of RAM @ 133/266 MHz and that does it for most games. The only thing that made a difference to me was buying better graphics cards and *more* RAM. I get 25-30 fps at mid to max details everywhere; I can even *underclock* my CPU and RAM and it hardly has any effect whatsoever unless I go below ~1.3GHz, so why bother? It's not gonna get significantly more fps, and even if it would, my eye couldn't register those anyway, so that'd be quite pointless. Now, if I was an android with an eye that could process more than 30 fps, or someone rigged my brain with faster video processing equipment, *then* I'd see a point in doing it :-P
And why exactly would I want to o/c for better interactivity? Linux was always a lot better, or at least as good, in that as Windows for me. Fiddling around with kernel schedulers helps here and there, but there's not much in interactivity or responsiveness that changes with the clock speeds. The box at work that I'm usually supposed to work on is a P2 running at ~400MHz. Right now I'm sitting at a P4 at 2.66GHz with hyperthreading, and the only difference in responsiveness (using about the same Linux system, and GNOME instead of my usually preferred e17 or Openbox) is that on the P4 some really long web pages render faster than on the P2. Those extra seconds I can wait for, really. Aside from that, the only other difference I notice is loading times (which is logical, since the P2 has a slower HD). That's about another ~20 secs per hour on the box. So, again, now that I have some people around who might shed some light on this: why exactly would I want to o/c?
_________________
"Confident, lazy, cocky, dead." -- Felix Jongleur, Otherland

( Twitter | Blog | GitHub )
Cintra
Advocate
Joined: 03 Apr 2004
Posts: 2111
Location: Norway

Posted: Thu Nov 10, 2005 3:54 pm

I came across the first post in this thread a while back, and it helped shock me back to sanity ;-)
http://forums.sudhian.com/messageview.cfm?catid=38&threadid=21436
...only one snag, now I'm a Gentoo addict!
Mvh
_________________
"I am not bound to please thee with my answers" W.S.
mdeininger
Veteran
Joined: 15 Jun 2005
Posts: 1740
Location: Emerald Isles, observing Dublin's docklands

Posted: Thu Nov 10, 2005 5:41 pm

Cintra wrote:
I came across the first post in this thread a while back, and it helped shock me back to sanity ;-)
http://forums.sudhian.com/messageview.cfm?catid=38&threadid=21436
...only one snag, now I'm a Gentoo addict!
Mvh


LOL, that post is *so* true. *thumbs up*
I'll bookmark that one for the next time someone tells me he's about to squeeze another 1.923% memory bandwidth out of his GF FX 8712XXL TDR abc^2 (yeah, I know that's not an actual model).

Still, I'd be interested in the motivation behind it. I fully agree with that first post; I'd rather have a silent K6-2 333 for work and watching videos instead of an airplane turbine going off next to me just to play Doom 3 at 90 fps, and it bugged me to no end that I couldn't underclock my Athlon XP without modification so I could use less of a fan... Come to think of it, I DO have a silent K6-2 that I got silent by putting a spare Athlon XP cooler on the thing and taking the fan off... works like a charm... now that's the type of modding I like... :)
_________________
"Confident, lazy, cocky, dead." -- Felix Jongleur, Otherland

( Twitter | Blog | GitHub )
djpenguin
Guru
Joined: 02 Sep 2004
Posts: 386

Posted: Mon Nov 14, 2005 3:06 am

oggialli wrote:
4) Yes, cool is fine, but esp. with P4 systems one really can't keep the real temperature of the die when under load anywhere near 45°C (with air). Don't worry until you reach ~70°C or even more.



Pardon me? I have a Northwood B 2.53GHz chip that runs at 33°C with the fan set to 4.5V and basic desktop stuff going, and 25°C if I turn the fan up to 12V. Under a long compile load, temps rise to around 37-38°C with the fan throttled up to around 6V. If the fan is set to 12V, the temps stay around 32°C while compiling.

I use an Alpha aluminum/copper heatsink with an 80mm Zalman fan on top. I don't have crazy amounts of case cooling either, just a pair of 80mm fans and the big 120mm one in the PSU.


Normally, I wouldn't nitpick like this, but damn: if you're gonna rip someone a new one over the supposed factual inconsistency of their post, do some fact-checking of your own before you make your allegations. I'm sure I'm not the only person in the world with an Alpha heatsink and a P4, and it's a given that some of the others have posted their temps on various overclocking/enthusiast forums. Incidentally, I have a video of an Athlon XP die cooking off in a puff of smoke when it hits ~90°C, if you'd like a concrete reason not to suggest that people run their commodity hardware at '70C or even more.'
joaopft
Tux's lil' helper
Joined: 20 Oct 2003
Posts: 82
Location: Lisbon, Portugal

Posted: Mon Nov 21, 2005 1:46 pm

thebigslide wrote:

oggialli wrote:
6) Something like pifast/superpi/prime is a lot better measure of stability than compiling (and a lot quicker to notice instabilities too).
Using WINE? Also, I will tell you this. I have had systems spontaneously reboot when doing an emerge -e world that would run a synthetic processor benchmark for hours.



I can second that. My system (Athlon 64 / NF4 chipset) would run memtest/Prime for 24 hours straight with no errors, and then fail an emerge -e system. Part of the problem is that neither memtest nor Prime is a 64-bit app, so the memory subsystem doesn't get stressed enough. Also, there should be a lot of transistors dedicated to running the 64-bit instruction set that won't get tested by 16- or 32-bit apps.

An excellent (and quicker) test that works on A64 systems is (with -O3 optimization set up):

Code:
 emerge libquicktime


This particular emerge takes a long time compiling the files cmodel_default.c and cmodel_yuv420p.c, which stresses the system a lot. Common problems with the NF3/NF4 chipsets result in a failed compile of either 'cmodel_default.c' or 'cmodel_yuv420p.c'. I came across this on the Gentoo forums, reading about instability problems on an early NF3 mobo at stock settings. From experimenting a little, I've found that this compile is a good test for overclocked settings. Most faulty NF3/NF4 systems will fail here before anywhere else (including memtest and prime).
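One way to use it, sketched below: loop the stress build a few times and stop at the first failure. The emerge command is the one from above; the STRESS_CMD indirection and its no-op default are my own additions, so the sketch runs anywhere:

```shell
#!/bin/sh
# Repeat a stress command and stop on the first failure. On a real Gentoo
# box you'd set STRESS_CMD="emerge --oneshot libquicktime"; the default
# here is a no-op placeholder so the script is runnable as-is.
STRESS_CMD="${STRESS_CMD:-true}"
passes=0
for i in 1 2 3; do
    if $STRESS_CMD; then
        passes=$((passes + 1))
    else
        echo "stress run failed on pass $i" >&2
        break
    fi
done
echo "clean passes: $passes"
```

A marginal overclock often survives one pass and dies on a later one, which is exactly what a single run can miss.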
_________________
Fiction is obliged to stick to possibilities. Truth isn't.
Mark Twain