Gentoo Forums
HOWTO: Download Cache for your LAN - Http-Replicator (ver 3.0)
assaf
Apprentice
Joined: 14 Feb 2005
Posts: 152
Location: http://localhost
Posted: Tue May 10, 2005 8:07 pm

I don't know if this has been reported yet, but there seems to be a problem when trying to emerge fetch-restricted packages (e.g. sun-jdk). The distfile must be placed in /usr/portage/distfiles for the emerge to continue, and http-replicator is completely out of the loop because portage doesn't even try to download the file...
Of course it's easy to work around, but...

Gherald
Veteran
Joined: 23 Aug 2004
Posts: 1399
Location: CLUAConsole
Posted: Wed May 11, 2005 12:20 am

Fetch-restricted packages must be downloaded manually on each machine. This is expected behavior.

assaf
Apprentice
Joined: 14 Feb 2005
Posts: 152
Location: http://localhost
Posted: Wed May 11, 2005 5:40 am

Gherald wrote:
Fetch-restricted packages must be downloaded manually on each machine. This is expected behavior.


Of course they need to be downloaded manually, but you must emerge them right after downloading and before running repcacheman; otherwise the file will be moved to /var/cache/... and won't be found by portage.

(Of course you could run http_proxy="" emerge ...)
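
For reference, a rough sketch of that workaround (the distfile name here is only an example; use whatever file the ebuild asks for):
Code:

# put the manually downloaded file where portage expects it
cp jdk-1_5_0_02-linux-i586.bin /usr/portage/distfiles/
# emerge right away, before repcacheman moves the file into the replicator cache
emerge sun-jdk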

flybynite
l33t
Joined: 06 Dec 2002
Posts: 620
Posted: Thu May 12, 2005 5:17 pm

assaf wrote:
Wouldn't it make sense for http-replicator to stop downloading a file if there are no clients requesting it?


I know what you mean. It is because http-replicator uses Python's asyncore module, which allows background downloading without using threads. Http-replicator doesn't know when or if any client connects or disconnects, because asyncore doesn't pass along this info. Adding it might be possible, but low-level network ops are hard to debug and make bulletproof.

A custom asyncore module is being looked at, so this may be possible in the future.

flybynite
l33t
Joined: 06 Dec 2002
Posts: 620
Posted: Thu May 12, 2005 7:12 pm

Master One wrote:

I think this problem can only be solved if http-replicator uses portage's FETCHCOMMAND defined in make.conf (because that is the only way the getdelta.sh script could be involved) instead of its own download routine.

Any chance for an implementation?



I've tried to use wget to download packages, but it doesn't work. I can't get any general-purpose download program to provide the error feedback and low-level data control that http-replicator needs.

It sounds to me like getdelta needs to have some network functions built in...

What might work for now is to have the getdelta.sh script fetch the files and then upload them back to the http-replicator server for others?
Back to top
View user's profile Send private message
flybynite
l33t
l33t


Joined: 06 Dec 2002
Posts: 620

PostPosted: Thu May 12, 2005 8:28 pm    Post subject: Reply with quote

bigmacx wrote:


The way I read the original HOWTO, there was an emphasis on repeated running of repcacheman. This is mentioned as important to do on the server, but not mentioned concerning the clients.



repcacheman isn't installed on the clients and isn't needed at all on the clients. The only time it needs to run on the server is after an emerge on the server.

bigmacx wrote:


Now I understand the continued need to run repcacheman on the server is due to server emerges. After the server emerges, it is possible that not all files used during the emerge will be in the cache, but will be left in the distfiles directory. So running repcacheman after a server-based emerge moves these "finds" into the http-replicator cache.



True

bigmacx wrote:

It would seem that the client would use whatever out-of-band method to get these files just like the server did.


Not true.

There are several ways files can get into distfiles and not go through http-replicator.

1. FTP is the most obvious. http-replicator currently doesn't work at all with FTP mirrors. I've found some new packages that are only available on the author's FTP site at first, and eventually get mirrored on an HTTP mirror, probably within hours of release. Clients will check your HTTP mirrors first, so http-replicator can serve these files after repcacheman moves them to the cache.

2. Portage has a "Local Mirror" override option for some ebuilds with only FTP sources or RESTRICT=nomirror.

From the HOWTO:
Quote:
Also, some packages in portage have a RESTRICT="nomirror" option which will prevent portage from checking replicator for those packages. The following will override this behavior. Create the file "/etc/portage/mirrors" containing:
Code:


# Http-Replicator Override for FTP and RESTRICT="nomirror" packages
local http://gentoo.osuosl.org

You can replace gentoo.osuosl.org with your favorite http:// mirror. If you already have a local setting, don't worry; as long as it is an HTTP mirror, this will still be effective.




What I can't override is the RESTRICT=nofetch option, for things like Sun's JDK. You have to go and click through some license or something, so portage will never download this package. There is no override in portage for this, and it frustrates all admins of multiple boxen.

However, if you run repcacheman, this package will be moved into the replicator's cache and can be downloaded from there. The whole replicator cache can be browsed and downloaded from http://replicatorbox:8080/, or from elsewhere if you set the correct alias in the options.
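
For example, a client could pull such a file straight from the cache with something like this (the hostname, port and file name are only placeholders):
Code:

# fetch a cached, fetch-restricted distfile directly from the replicator box
wget -P /usr/portage/distfiles http://replicatorbox:8080/jdk-1_5_0_02-linux-i586.bin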




In the SPECIAL setup you're trying to run, repcacheman is necessary because portage leaves borked, incomplete and corrupt files in distfiles all the time. In your situation, repcacheman will checksum the files, preventing junk from getting into the cache.



My only comment on your setup is that it isn't really a good idea to do mass unattended portage updates.

Unfortunately, some updates will break your system or other packages. The developers are getting much better, but it can and has happened to me.

What I do is update one box first, setting portage to automatically build binary packages. Then, after a while, I update the other boxes using the binaries I've created. This saves time and prevents a lot of heat in my laptops! Since not all my boxes are the same arch, some still just download the distfiles from http-replicator and do their own compiling.

I might update firefox on the other boxes right after I start running the new version. In the case of GCC, glibc, or another system package, I will wait days before upgrading all the other boxes.
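
A rough sketch of that binary-package flow, assuming the boxes share an arch and can reach the packages the first box built:
Code:

# on the box that compiles: keep a binary package of everything emerged
# (FEATURES="buildpkg" in /etc/make.conf makes this the default)
emerge --buildpkg firefox

# on the other boxes: use the prebuilt binary if one is available
emerge --usepkg firefox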

bigmacx
n00b
Joined: 01 Nov 2004
Posts: 9
Posted: Fri May 13, 2005 6:11 pm

flybynite wrote:

repcacheman isn't installed on the clients and isn't needed at all on the clients. The only time it needs to run on the server is after an emerge on the server.
...
There are several ways files can get into distfiles and not go through http-replicator.

1. FTP is the most obvious. http-replicator currently doesn't work at all with FTP mirrors. I've found some new packages that are only available on the author's FTP site at first, and eventually get mirrored on an HTTP mirror, probably within hours of release. Clients will check your HTTP mirrors first, so http-replicator can serve these files after repcacheman moves them to the cache.

I know my post count is low on this forum, but I find it slightly amusing that everyone responding to my post dismisses the validity of the question and gives a trivial answer.

So then, let's see:
1. http-replicator Clients do emerges just like http-replicator Servers ---------->CHECK

2. http-replicator Clients can install packages which are not simultaneously or previously installed on the http-replicator Server ---------->CHECK

3. If an http-replicator Client emerges a package that uses non-http-replicator-Server methods to download the package source, the package source will be in the http-replicator Client's local distfiles directory ---------->CHECK

4. So then, this package source will never get to the http-replicator Server's cache ---------->CHECK

5. Putting repcacheman and rsync on the http-replicator Client solves this condition (see the sketch below) ---------->CHECK
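
Something like this, run on the client, is what I mean by point 5 (the hostname and paths are just placeholders):
Code:

# push distfiles the client fetched out-of-band over to the replicator server
rsync -av /usr/portage/distfiles/ replicatorbox:/usr/portage/distfiles/
# then let repcacheman on the server checksum them and move them into the cache
ssh replicatorbox repcacheman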

flybynite wrote:
My only comment on your setup is that it isn't really a good idea to do mass unattended portage updates.

Unfortunately, some updates will break your system or other packages. The developers are getting much better, but it can and has happened to me.

I'm aware of this problem. There is also the issue of the "._cfgxxxxx" files and etc-update. Right now, I'm just trying to prototype this whole auto-emerge portion.

I have plans to try to create a patching system similar to what we use at work with the Windows servers. We use VMware and snapshotting to duplicate our production environment into a test environment, then patch and test the servers, and move the patched OSes back to production. This does not always work as cleanly as we would like, due to the hand-holding needed for some databases and custom software. But the big benefit is that we stopped "Patching and Praying" with our production servers a LONG time ago, when we got burnt by Windows Update.

Since we are trying to use more Linux at work, I wanted to see about getting a similar automatic update process going for the production -> test -> production maneuver. I'll use either VMware or Xen, but I would like to try the snapshotting offered by LVM.
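
As a rough illustration of the LVM part (the volume group, LV name and size are made up):
Code:

# snapshot the production root LV, run the updates against a copy of it,
# and only roll them into production once they check out
lvcreate --snapshot --size 5G --name prod-test /dev/vg0/prod-root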

I hope the above helps explain what I'm asking, because if someone tells me one more time that "repcacheman does not need to be run on the clients," I'm gonna cyberscream!!!! :P
_________________
Gentoo forever baby!

killercow
Tux's lil' helper
Joined: 29 Jan 2004
Posts: 86
Location: Netherlands
Posted: Tue May 24, 2005 11:54 am

Okay,

Here's my post again. I know I could read the entire thread again and see what has been achieved by now, but could the following questions/functions be handled by a next version of this? (Or should this be a separate program?)

I'm wondering if this program could be used to do the following:

Create a single point for portage to store all of the synced files, probably mounted through NFS/Samba, or queried by a special portage clone.
Create a single server/machine that would handle compiling programs (possibly helped by other machines through distcc and ccache).

Create a way to handle different arch and make options on one server.

This would be the sweetest option available for things like clusters/schools/internet cafes.

Since most of these situations:
can't have their nodes compile things for themselves,
have their nodes configured in a couple of different ways,
usually have a couple of nodes per config/arch set,
and only need one portage cache.

This would imply that any machine on the LAN could just do an emerge or emerge world -up, and it would query the shared portage tree for its packages depending on its own config set.
It would then ask the http_proxy thingy for the binary packages for its arch and make config.
The proxy would then either hand over that package so it can be installed, or start compiling it (maybe with the help of other machines, including the requester, through distcc).
Since this would also imply that some packages will be built with different use/make flags but end up identical (e.g. when the make flags don't apply), it would be handled most simply by using a large (2GB+?) ccache.

Any other thoughts on this?

I would use it for my cluster consisting of 5 servers all configured the same, 1 other server which only has two different make flags,
and one completely different server (different arch and setup, because it's the fileserver).
This would help my cluster get managed the way a setup like that should be (just call the emerge world -up command on every machine and I'm set),
and if I need a special package on one of the nodes, it could just emerge it, and it would get recorded in that node's world file.

assaf
Apprentice
Joined: 14 Feb 2005
Posts: 152
Location: http://localhost
Posted: Tue May 24, 2005 12:07 pm

LOL! An internet cafe running gentoos...

flybynite
l33t
Joined: 06 Dec 2002
Posts: 620
Posted: Tue May 24, 2005 11:04 pm

killercow wrote:

Im wondering if this program could be used to do the following:


No. In fact, I would say that no single program is ever going to do all these things. Everything you ask can be done today, but it will require you to do some coding/scripting...

My best guesses:

If the nodes can't compile anything, then they should probably be thin clients and just netboot an image off the server. Gentoo can be used for the server and for building the different images. Probably best for internet cafes...

Most nodes that can do _some_ limited compiling can use distcc. Distcc can be configured so that almost no compiling is done on the localhost (some has to be done on localhost, but not much). I do the custom packages on my laptops this way. The common packages are binaries served through http-replicator.
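
A sketch of that kind of node setup, with made-up host names (the heavy compiling goes to the remote boxes; localhost mostly just preprocesses and links):
Code:

# enable distcc for portage builds on the node
echo 'FEATURES="distcc"' >> /etc/make.conf
# point distcc at the boxes that do the real compiling
distcc-config --set-hosts "buildbox1 buildbox2"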

Did you know that different arch's can still use Distcc:
using amd 64 for 32 bit distcc compiles

The same chroot solution can compile on remote hosts without distcc:

Compiling by Proxy: Rsync Edition

So are you willing to try and do this or are you just looking to "emerge" a total solution?

killercow
Tux's lil' helper
Joined: 29 Jan 2004
Posts: 86
Location: Netherlands
Posted: Wed May 25, 2005 9:42 am

flybynite wrote:
killercow wrote:

Im wondering if this program could be used to do the following:


...snap..

So are you willing to try and do this or are you just looking to "emerge" a total solution?


I'm willing to help develop it.
My questions are not supposed to yield an emerge-able solution, as I know there is no such thing. They are meant to stir up discussion about programs like yours, and what their future could be.

I think I described a few scenarios in which it would be good to have a solution like the one I described.

Internet cafes, schools, offices and cluster nodes could all make use of one program to handle the following things:
Emerge new programs, and emerge updates (like any normal Gentoo setup)

But with the following distinct advantages:
Nodes get their packages from a master "emerge" server, which hands out binaries based on the requesting node's arch, make and USE flags.
Nodes could participate in the actual compiling if the master emerge server asks them to through distcc (speeding up the building of packages).
Nodes could install their own packages, which comes in handy in offices (since not every system should be the same, and thus the system remains flexible, unlike net-boots).
Each node can run different hardware with optimized software (unlike net-boots).
One server handles all outgoing HTTP traffic (no more overhead from different systems trying to sync/download packages; this is where http-replicator comes in).
Nodes could still issue a simple "net-emerge world -up" to update.
Nodes could still issue a simple "net-emerge package***" to install a package.

The question is: is this a view I share with others, or am I on my own? And do you think Gentoo is not meant to be run on larger mixed LANs?
I reckon it makes administrating a lot of homogeneous/heterogeneous LANs a lot simpler and faster, since they all run Gentoo.

Here's a mockup:
http://www.innerheight.com/portage.pdf


Last edited by killercow on Wed May 25, 2005 10:11 am; edited 1 time in total

killercow
Tux's lil' helper
Joined: 29 Jan 2004
Posts: 86
Location: Netherlands
Posted: Wed May 25, 2005 9:46 am

assaf wrote:
LOL! An internet cafe running gentoos...


And why is this not doable?
It would give you a secure platform, with exactly the apps you'd like.

Many internet cafe systems in use today use expensive kiosk software wrapped around IE, on top of an OS which is not really needed for anything but still needs dozens of updates and care to keep working.

assaf
Apprentice
Joined: 14 Feb 2005
Posts: 152
Location: http://localhost
Posted: Wed May 25, 2005 10:37 am

killercow wrote:
assaf wrote:
LOL! An internet cafe running gentoos...


And why is this not doable?
It would give you a secure platform, with exactly the apps you'd like.

Many internet cafe systems in use today use expensive kiosk software wrapped around IE, on top of an OS which is not really needed for anything but still needs dozens of updates and care to keep working.


Exactly. It's funny to think about an internet cafe sysadmin who has even heard of Gentoo, let alone one skilled enough to maintain it. It's more likely that the internet cafe owner will hire the cheapest sysadmin, and that sysadmin will install Red Hat or Windows...

killercow
Tux's lil' helper
Joined: 29 Jan 2004
Posts: 86
Location: Netherlands
Posted: Thu May 26, 2005 12:38 pm

assaf wrote:
killercow wrote:
assaf wrote:
LOL! An internet cafe running gentoos...


And why is this not doable?
It would give you a secure platform, with exactly the apps you'd like.

Many internet cafe systems in use today use expensive kiosk software wrapped around IE, on top of an OS which is not really needed for anything but still needs dozens of updates and care to keep working.


Exactly. It's funny to think about an internet cafe sysadmin who has even heard of Gentoo, let alone one skilled enough to maintain it. It's more likely that the internet cafe owner will hire the cheapest sysadmin, and that sysadmin will install Red Hat or Windows...


10 years ago, people said the same thing about internet cafes in general.
Cafes with internet access, LOL! (Why would anyone sit behind a computer in a cafe?)

I think Gentoo could run fine in internet cafes if it had a system like I described above. And thus I think we should make an effort to create that system.

NightMonkey
Guru
Joined: 21 Mar 2003
Posts: 328
Location: Pittsburgh, PA
Posted: Sat Jun 04, 2005 8:43 pm

assaf wrote:
killercow wrote:
assaf wrote:
LOL! An internet cafe running gentoos...


And why is this not doable?
It would give you a secure platform, with exactly the apps you'd like.

Many internet cafe systems in use today use expensive kiosk software wrapped around IE, on top of an OS which is not really needed for anything but still needs dozens of updates and care to keep working.


Exactly. It's funny to think about an internet cafe sysadmin who has even heard of Gentoo, let alone one skilled enough to maintain it. It's more likely that the internet cafe owner will hire the cheapest sysadmin, and that sysadmin will install Red Hat or Windows...


I think you are slagging a big group of people with too broad a brush. And not everyone can start their SysAdmin career working for IBM, Sun or NASA. I was in Budapest in 2001, and what I saw "Internet Cafe" admins doing there was quite advanced, really squeezing the MHz out of older hardware, too. In Amsterdam, there were even spiffier options at some of the "mega cafes" there. And in my home town, many "Internet Cafe" admins are helping to make my whole city a free wireless hotspot.

Anyhow, Open Source, and high tech in general, grows not by people slagging the projects of others, but by incorporating great ideas from all projects, no matter how "mighty" or "small". Please think twice before belittling others' work or career choices, especially on a public forum.

And no, I don't admin a cafe network... ;)

assaf
Apprentice
Joined: 14 Feb 2005
Posts: 152
Location: http://localhost
Posted: Sun Jun 05, 2005 7:25 am

NightMonkey wrote:
assaf wrote:
killercow wrote:
assaf wrote:
LOL! An internet cafe running gentoos...


And why is this not doable?
It would give you a secure platform, with exactly the apps you'd like.

Many internet cafe systems in use today use expensive kiosk software wrapped around IE, on top of an OS which is not really needed for anything but still needs dozens of updates and care to keep working.


Exactly. It's funny to think about an internet cafe sysadmin who has even heard of Gentoo, let alone one skilled enough to maintain it. It's more likely that the internet cafe owner will hire the cheapest sysadmin, and that sysadmin will install Red Hat or Windows...


I think you are slagging a big group of people with too broad a brush. And not everyone can start their SysAdmin career working for IBM, Sun or NASA. I was in Budapest in 2001, and what I saw "Internet Cafe" admins doing there was quite advanced, really squeezing the MHz out of older hardware, too. In Amsterdam, there were even spiffier options at some of the "mega cafes" there. And in my home town, many "Internet Cafe" admins are helping to make my whole city a free wireless hotspot.

Anyhow, Open Source, and high tech in general, grows not by people slagging the projects of others, but by incorporating great ideas from all projects, no matter how "mighty" or "small". Please think twice before belittling others' work or career choices, especially on a public forum.

And no, I don't admin a cafe network... ;)



Hey, j/k ;)

killercow
Tux's lil' helper
Joined: 29 Jan 2004
Posts: 86
Location: Netherlands
Posted: Mon Jun 06, 2005 9:05 am

So,

Back on topic (not really The topic, but my topic)

What would be needed to create a system like mine? And who would be interested in it?

I'd personally love it for my clusters, and maybe even for my collection of home PCs.
Of course I could share processing power and build time with my friends, who might not have the same horsepower at home as I do (since my machine is on 24/7, I could let my friend compile on my machine through this system at night).

flybynite
l33t
Joined: 06 Dec 2002
Posts: 620
Posted: Thu Jun 09, 2005 8:14 pm    Post subject: Now an official package!!

Many thanks to Maurice van der Pot, who on 02 Jun 2005 added http-replicator as an official Gentoo package!!

See http://packages.gentoo.org/ebuilds/?http-replicator-3.0

Current users wishing to switch to the official ebuild should check the HOWTO at the start of this thread. It has been updated to reflect the official ebuild.

The changes in the official version are minimal and will not affect your current http-replicator setup.
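
For anyone switching, the install itself is just a sync and an emerge (a minimal sketch; see the HOWTO for the config details):
Code:

emerge --sync
emerge http-replicator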

NightMonkey
Guru
Joined: 21 Mar 2003
Posts: 328
Location: Pittsburgh, PA
Posted: Thu Jun 09, 2005 9:06 pm    Post subject: Re: Now an official package!!

flybynite wrote:
Many thanks to Maurice van der Pot, who on 02 Jun 2005 added http-replicator as an official Gentoo package!!

See http://packages.gentoo.org/ebuilds/?http-replicator-3.0

Current users wishing to switch to the official ebuild should check the HOWTO at the start of this thread. It has been updated to reflect the official ebuild.

The changes in the official version are minimal and will not affect your current http-replicator setup.


Congrats, flybynite. I've been using your code for over a year now, and it just rocks, and keeps the good Gentoo mirror admins from blocking my NAT'd IP with several boxes behind it. :) Thank you!

javac16
Tux's lil' helper
Joined: 10 Aug 2003
Posts: 111
Posted: Wed Jun 15, 2005 12:31 pm

I want to pass on my congrats as well. I have been using http-replicator for a long time and really like it. I have just upgraded to the portage version.

javac16
Tux's lil' helper
Joined: 10 Aug 2003
Posts: 111
Posted: Thu Jun 23, 2005 3:42 am

I have recently begun having some issues on my http-replicator server box. My client machines can connect to it no problem, but the box itself doesn't seem to be able to download anything; it just hangs. I have changed mirrors a number of times with no luck, and I have restarted http-replicator.

Running netstat -at shows that the connection to the mirror site is successful, but it just doesn't seem to download. I can download successfully when I do not go through the http-replicator proxy. Any ideas where I should look?

Code:

22 Jun 2005 18:49:13 STAT: HttpServer 41 bound to gentoo.chem.wisc.edu
22 Jun 2005 19:04:13 STAT: HttpClient 42 bound to 192.168.2.166
22 Jun 2005 19:04:13 INFO: HttpClient 42 proxy request for http://gentoo.chem.wisc.edu/gentoo/distfiles/netkit-ftp-0.17.tar.gz
22 Jun 2005 19:04:14 STAT: HttpServer 42 bound to gentoo.chem.wisc.edu
22 Jun 2005 19:19:15 STAT: HttpClient 43 bound to 192.168.2.166
22 Jun 2005 19:19:15 INFO: HttpClient 43 proxy request for http://gentoo.chem.wisc.edu/gentoo/distfiles/netkit-ftp-0.17.tar.gz
22 Jun 2005 19:19:15 STAT: HttpServer 43 bound to gentoo.chem.wisc.edu
22 Jun 2005 19:34:18 STAT: HttpClient 44 bound to 192.168.2.166
22 Jun 2005 19:34:18 INFO: HttpClient 44 proxy request for http://gentoo.chem.wisc.edu/gentoo/distfiles/netkit-ftp-0.17.tar.gz
22 Jun 2005 19:34:18 STAT: HttpServer 44 bound to gentoo.chem.wisc.edu
22 Jun 2005 19:49:22 STAT: HttpClient 45 bound to 192.168.2.166
22 Jun 2005 19:49:22 INFO: HttpClient 45 proxy request for http://gentoo.chem.wisc.edu/gentoo/distfiles/netkit-ftp-0.17.tar.gz
22 Jun 2005 19:49:22 STAT: HttpServer 45 bound to gentoo.chem.wisc.edu
22 Jun 2005 21:00:20 STAT: HttpClient 46 bound to 192.168.2.166
22 Jun 2005 21:00:20 INFO: HttpClient 46 proxy request for http://gentoo.chem.wisc.edu/gentoo/distfiles/netkit-ftp-0.17.tar.gz
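
(For reference, a single fetch through the proxy can be tested by hand like this, assuming the default port 8080 from the HOWTO:)
Code:

http_proxy="http://127.0.0.1:8080" wget -O /dev/null http://gentoo.chem.wisc.edu/gentoo/distfiles/netkit-ftp-0.17.tar.gz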

Quincy
Apprentice
Joined: 02 Jun 2005
Posts: 201
Location: Germany
Posted: Sun Jul 03, 2005 5:01 pm

Is it a bug or a feature...

I have set up the portage version of http-replicator and it works fine. It saves me a lot of time, bandwidth and copying files around by hand.

But I think there is a problem with resuming downloads. Is it impossible for http-replicator to see a partly downloaded file and resume it? I'm asking because I was downloading the new KDE version via ISDN and my connection got interrupted after 10MB of a package. I reconnected, and wget correctly showed "Partial content... resuming", but then it got stuck at that position without doing anything obvious. My internet connection is fully loaded by http-replicator downloading something.

Has it started the file over again? Is there a solution for this "problem"?

flybynite
l33t
Joined: 06 Dec 2002
Posts: 620
Posted: Sun Jul 03, 2005 9:02 pm

Quincy wrote:

Is it impossible for http-replicator to see a partly downloaded file and resume it? I'm asking because I was downloading the new KDE version via ISDN and my connection got interrupted after 10MB of a package.



http-replicator only supports resuming on the client side. If the complete file is in the cache, clients can resume downloads.


If http-replicator doesn't have the complete file in the cache, how would it know? How does Apache know if the web pages it serves are broken? The answer is that they don't. They can only serve what the admin has provided. If incomplete files get into replicator's cache, replicator will still serve them.

If http-replicator knows the file is incomplete, it will delete it from the cache. This happens if you start a long download and then restart or shut down http-replicator.


If you really want to resume your download, temporarily remove the http_proxy from your /etc/make.conf and let portage resume it from one of the mirrors. Not all mirrors support resuming, so you may still lose what you have. Then follow the steps below to put the files back into replicator.

I wrote the repcacheman script to be able to verify that downloads are complete. In the current version you can move all the files from replicator's cache to the distfiles dir and then run repcacheman. It will checksum the files and move the good files back to the cache dir.

This should do it, and delete the junk left over, if you use the default dirs.
Code:

# move everything from the replicator cache back into the distfiles dir
mv /var/cache/http-replicator/* /usr/portage/distfiles
# checksum the files and move the good ones back into the cache
repcacheman
# clear out whatever junk is left behind in distfiles
rm /usr/portage/distfiles/*



In the future:

1. http-replicator may support resuming its downloads.
2. repcacheman may automatically checksum the cache whenever it is run
3. portage may be smart enough to work with replicator?????

flybynite
l33t
Joined: 06 Dec 2002
Posts: 620
Posted: Sun Jul 03, 2005 9:07 pm

javac16 wrote:
I have recently begun having some issues on my http-replicator server box. My client machines can connect to it no problem, but the box itself doesn't seem to be able to download anything; it just hangs.


Have you been able to fix this yet?

Quincy
Apprentice
Joined: 02 Jun 2005
Posts: 201
Location: Germany
Posted: Sun Jul 03, 2005 10:17 pm

flybynite wrote:

In the future:

1. http-replicator may support resuming its downloads.
2. repcacheman may automatically checksum the cache whenever it is run
3. portage may be smart enough to work with replicator?????


I hope it will be the last one... downloading separately on every single machine is annoying and needless.
So I'm glad about http-replicator :D
Page 14 of 24

 