Gentoo Forums :: Kernel & Hardware

replacing disk on raid1 [solved]

Elleni
Posted: Mon Apr 03, 2017 9:14 pm    Post subject: replacing disk on raid1 [solved]

One of my 1 TB disks failed, so I replaced it and created a primary partition on the new disk, but I cannot add that partition to the array: mdadm claims the new sdc1 is not big enough. fdisk complains that on the first disk, which holds the data and is still a member of the raid1 array, partition 1 (sdb1) does not start on a physical sector boundary.

When I partition the new disk, the resulting sdc1 obviously does not match. The fdisk output follows.

Code:
fdisk /dev/sdb

Welcome to fdisk (util-linux 2.28.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): p
Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device     Boot Start        End    Sectors   Size Id Type
/dev/sdb1  *       63 1953525167 1953525105 931.5G fd Linux raid autodetect

Partition 1 does not start on physical sector boundary.

Command (m for help): q
# fdisk /dev/sdc

Welcome to fdisk (util-linux 2.28.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): p
Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xca10af70


For now I have deleted the sdc1 partition and added the whole disk /dev/sdc instead, and it is working / syncing.

But how can I solve the issue of the new partition not being big enough? How can I create an identical sdc1 partition with the same size as sdb1? Or is there a better approach?
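
One way I found to clone an MBR partition table one-to-one would be sfdisk; a sketch only, untested here, and it would presumably also copy the start sector that fdisk complains about:
Code:
sfdisk -d /dev/sdb | sfdisk /dev/sdc   # dump sdb's table and write it to sdc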


Last edited by Elleni on Thu Apr 06, 2017 5:04 pm; edited 1 time in total

NeddySeagoon
Posted: Mon Apr 03, 2017 11:22 pm

Elleni,

Code:
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device     Boot Start        End    Sectors   Size Id Type
/dev/sdb1  *       63 1953525167 1953525105 931.5G fd Linux raid autodetect

Partition 1 does not start on physical sector boundary.

I don't read German, but I'm familiar with fdisk.


"I/O size (minimum/optimal): 4096 bytes / 4096 bytes" tells you that this drive has 4 KiB physical sectors.
Partition 1 starts at LBA 63. That is block 63 counted in 512-byte logical sectors, and one physical sector on your drive is 8 logical sectors.
Ideally your partition should start at an LBA that is an exact multiple of 8, so that it starts on a physical sector boundary.
Unfortunately, it is out by one, so you have a misaligned partition and the access speed penalty that goes with it.
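
As a quick check of the arithmetic (shell, just illustrating the numbers above):
Code:
# 4096 / 512 = 8 logical sectors per physical sector, so an aligned
# partition must start at a multiple of 8.
echo $((63 % 8))     # 7 -> misaligned (the old fdisk default start)
echo $((2048 % 8))   # 0 -> aligned   (the current fdisk default)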

When the raid was new, fdisk defaulted to a partition start sector of 63.
Newer fdisk versions default to 2048, hence your partition size issue: a partition created with the new default starts further in, so it ends up slightly smaller.
It's just the default that has been changed.

Provided the sync works, the partition table will be copied over too, as you are syncing the whole drive.
That still leaves the access speed penalty due to the misaligned partitions.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.

Elleni
Posted: Tue Apr 04, 2017 12:04 am

Thank you for the explanation. Is there a way to convert to properly aligned partitions without losing the contents of this raid array? For example, temporarily disabling the raid and rearranging the partitions with gparted? And since these drives contain the mounted user homes, can it be done on the live system at all, or do I have to boot from a live medium?

NeddySeagoon
Posted: Tue Apr 04, 2017 9:03 am

Elleni,

You are on the right track. It's always good to have a backup, and it's safer and simpler if you do the copy step below from a separate boot disk.
Using mdadm, fail one drive in the raid set and remove it.
Repartition this drive so that the partitions are aligned; fdisk now gets this right by default (the old default was the cause of your original problem).
Do not make a filesystem on this drive.
Donate this drive to a new raid set that has one drive missing. Make a filesystem on this new raid set.
If the raid is only /home, its metadata version won't matter, but if it holds /boot, take care to use the same raid metadata version as the old raid.

Reboot with a USB stick.

Mount the old raid read-only somewhere.
Mount the new raid read/write.

Copy the data from one to the other. The read-only mount is to protect you from yourself.
chroot into the new raid and install the boot loader.
Add a file to the top-level directory (touch new_raid) so you can tell them apart.
Fix the /etc/fstab on the new raid.

Now you have two degraded raid sets that differ only in the new_raid file.

Reboot to test. Check that you can see the new_raid file.
If not, the old raid is mounted. Adjust your setup until you can mount the new raid at boot.
You may need to edit the old /etc/fstab. Exactly what you need to do depends on your install.

Once all is well, destroy the old raid set and repartition it to match the new one.
Add the partition to the new raid set and the raid set will resync.

At this point, you will have migrated from a misaligned raid set to an aligned raid set.
It's like replacing a failed drive, but with an extra copy operation to move the data from the misaligned filesystem to the aligned one.
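
In command form, the whole procedure might look roughly like this. A sketch only: the device names, array names and mount points are examples rather than your exact setup, and the metadata caveat above still applies.
Code:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # fail and remove one mirror
fdisk /dev/sdb                                       # repartition; the default start of 2048 is aligned
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 missing
mkfs.ext4 /dev/md1                                   # filesystem on the new, degraded set
mount -o ro /dev/md0 /mnt/old                        # old raid read-only
mount /dev/md1 /mnt/new                              # new raid read/write
cp -a /mnt/old/. /mnt/new/                           # copy the data across
touch /mnt/new/new_raid                              # marker file to tell the raids apart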

Elleni
Posted: Wed Apr 05, 2017 5:43 pm

Hello Neddy,

How can I correct this?
Code:
md: invalid raid superblock magic on sdb1
md: sdb1 does not have a valid v0.90 superblock, not importing!
md: invalid raid superblock magic on sdc1
md: sdc1 does not have a valid v0.90 superblock, not importing!
md: autorun ...
md: ... autorun DONE.
md/raid1:md127: active with 1 out of 2 mirrors
md127: detected capacity change from 0 to 1000070643712
md/raid1:md0: active with 1 out of 2 mirrors
md0: detected capacity change from 0 to 1000070578176
EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)

When I stop the read-only, auto-created md127, I can create md1 with:
Code:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 missing
I can mount md1 successfully, but after a reboot it is not mounted and the system creates md127 again.

As this mirrored raid only contains user homes and nothing else, I thought I would not need a v0.90 superblock?

Thanks again for your help!

NeddySeagoon
Posted: Wed Apr 05, 2017 6:47 pm

Elleni,

I suspect that you have kernel raid auto-assembly turned on and are using version 1.2 superblocks.

Try something like
Code:
$ sudo mdadm -E /dev/sda1
/dev/sda1:
          Magic : a92b4efc
        Version : 0.90.00   <----
           UUID : 9392926d:64086e7a:86638283:4138a597
  Creation Time : Sat Apr 11 16:34:40 2009
     Raid Level : raid1
  Used Dev Size : 40064 (39.13 MiB 41.03 MB)
     Array Size : 40064 (39.13 MiB 41.03 MB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 125

    Update Time : Tue Jan 24 15:25:57 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : acb40f9 - correct
         Events : 34
That's my /boot.

I live with the auto-assigned numbers, but I think you can set them in mdadm.conf.
See
Code:
man mdadm.conf

Elleni
Posted: Thu Apr 06, 2017 5:14 am

No matter what I do, it always auto-assembles the read-only md127 after a reboot. I tried to add it in mdadm.conf, but md127 is still there after a reboot. I then have to stop md127, delete the partition via fdisk and recreate it, and then stop the automatically created md1. After that I am able to manually create my md1 and use it to copy data onto it, but it does not persist across a reboot. Is this a udev thing, or is it auto-assembly enabled in the kernel? And how am I supposed to stop it?

I have also done
Code:

mdadm --detail --scan >> /etc/mdadm.conf

in order to get a line for md1 in mdadm.conf

I have tried:
Code:
cat /proc/mdstat         
Personalities : [raid1]
md127 : active (auto-read-only) raid1 sdb[0]
      976631488 blocks super 1.2 [2/1] [U_]
      bitmap: 0/8 pages [0KB], 65536KB chunk

md0 : active raid1 sdc1[0]
      976631424 blocks super 1.2 [2/1] [U_]
      bitmap: 5/8 pages [20KB], 65536KB chunk

unused devices: <none>


cfdisk /dev/sdb

-> deleted the partition on sdb and recreated it

Now md1 is automatically created somehow, not with the partition sdb1 but with the whole disk sdb:


Code:

# cat /proc/mdstat         
Personalities : [raid1]
md1 : active raid1 sdb[0]
      976631488 blocks super 1.2 [2/1] [U_]
      bitmap: 0/8 pages [0KB], 65536KB chunk

md0 : active raid1 sdc1[0]
      976631424 blocks super 1.2 [2/1] [U_]
      bitmap: 5/8 pages [20KB], 65536KB chunk

unused devices: <none>


So I stop md1 and recreate it as needed with partition sdb1:
Code:

mdadm --stop /dev/md1
mdadm: stopped /dev/md1
# mdadm --create /dev/md1 --level=mirror --raid-devices=2 /dev/sdb1 missing
mdadm: /dev/sdb1 appears to be part of a raid array:
       level=raid1 devices=2 ctime=Thu Apr  6 06:51:13 2017
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.


Now I can format md1 with ext4, mount it, and copy my data onto it, but after a reboot the story begins again with the automatically created read-only md127 raid...

NeddySeagoon
Posted: Thu Apr 06, 2017 8:38 am

Elleni,

Code:
cat /proc/mdstat         
Personalities : [raid1]
md127 : active (auto-read-only) raid1 sdb[0]
      976631488 blocks super 1.2 [2/1] [U_]
      bitmap: 0/8 pages [0KB], 65536KB chunk

This says super 1.2, so kernel raid auto-detect will not work; it only looks for super 0.9 raid arrays.
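
For reference, kernel auto-detect only works for arrays created with the old superblock format on type fd partitions; that would mean something like the line below (a sketch, with example device names):
Code:
mdadm --create /dev/md1 --metadata=0.90 --level=1 --raid-devices=2 /dev/sdb1 missing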

Do you have an initrd?
If so, does it contain mdadm?
That is the mdadm.conf you need to edit, as the mdadm.conf on your root will not be used: the raid is already assembled before root is mounted.
If you are not sure, pastebin your dmesg.

We are going to have some terminology issues around creating and assembling raids.
--create is done only once in the life of a raid set. It writes the raid metadata to the elements of the raid set.
--assemble is performed every boot. It examines existing metadata and starts existing raid sets with the correct members.
There is no need to keep using --create. You should use --stop followed by --assemble.
Deleting the partition (partition table) does nothing; it is just a pointer to your data.
When you remove the pointer, the data remains. It is not possible to access the data until you recreate the pointer (the partition table).
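
With the device names from your output, a typical round trip would be something like this (a sketch):
Code:
mdadm --stop /dev/md127                # stop the auto-assembled instance
mdadm --assemble /dev/md1 /dev/sdb1    # re-assemble it under the name you want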

My initrd init script contains
Code:
 # /  (root)  I wimped out of root on lvm for this box
/sbin/mdadm --assemble /dev/md126 --uuid=5e3cadd4-cfd2-665d-9690-1ac76d8f5a5d

The --uuid is the UUID of the raid set from mdadm -E, not the UUID of the filesystem it carries.

Code:
# cat /proc/mdstat         
Personalities : [raid1]
md1 : active raid1 sdb[0]
indicates that you have made the raid set on the entire drive, not on sdb1.
That might be a typo in the mdadm --create command.

Elleni
Posted: Thu Apr 06, 2017 10:49 am

Hello NeddySeagoon, and thanks for your patience with me :)

No, I do not have an initrd, as I have not needed one until now.

The md1 was originally made with sdb (the whole disk), but I had deleted it, and now I want to make it with sdb1. As soon as I stop the read-only md127 array, the system creates md1 with the whole disk sdb automatically. That's why I have to stop md1 too. Then, when trying to create or assemble a new md1 with sdb1, it says sdb1 is not found or busy, so I deleted the partition within cfdisk, recreated sdb1 with partition type fd, and then I was able to create md1.

I understand about the mdadm UUID thing, and that is what I had put in mdadm.conf. Nevertheless, I am not able to mount md1 after a reboot. It might be, as you say, that the raid is already assembled before root is mounted; I don't know. I will post the dmesg output in the evening; I don't have access to the box right now.

Thanks again for your help! :)

NeddySeagoon
Posted: Thu Apr 06, 2017 11:51 am

Elleni,

With no initrd and raid superblock version 1.2, the raid is not auto-assembled.
However, removing the partition table does not remove anything except the partition table.

With raid superblock version 1.2, the raid metadata is written at the start of the partition, or of the drive if you use the entire drive.
For version 0.9, it's at the end.

mdadm should prevent you from having both raid superblock versions at the same time, but it sounds like it may not.
You can use mdadm to remove the raid superblock information and make the raid again; there should be no warnings about the volume having existing raid superblocks.
Then make the filesystem and copy the data over.
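
In commands, something like this (a sketch; the member device is an example):
Code:
mdadm --stop /dev/md1                  # the array must be stopped first
mdadm --zero-superblock /dev/sdb1      # wipe the old raid metadata
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 missing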

mdadm also allows you to edit the preferred minor device number, shown by mdadm -E.
This is supposed to be used unless it's overridden when the raid is assembled.
You do that in mdadm.conf,
Code:
ARRAY /dev/md1 super-minor=1
, if the preferred minor is 1 in the raid superblock.

Elleni
Posted: Thu Apr 06, 2017 4:26 pm

Hi NeddySeagoon,

I stopped md1 with:
Code:
mdadm --stop /dev/md1

removed the superblock with:
Code:

mdadm --zero-superblock /dev/sdb1

recreated md1 with:
Code:

mdadm --create /dev/md1 --level=mirror --raid-devices=2 /dev/sdb1 missing
mdadm --assemble --scan

I added a line to mdadm.conf with:
Code:

mdadm --detail --scan >> /etc/mdadm.conf

Then I mounted the raid, and everything looked fine (as it always does), but this time I had also removed the superblock, so I was hoping it would now be persistent after a reboot. But surprise, after reboot:
Code:

cat /proc/mdstat
Personalities : [raid1]
md127 : active (auto-read-only) raid1 sdb[0]
      976631488 blocks super 1.2 [2/1] [U_]
      bitmap: 0/8 pages [0KB], 65536KB chunk

md0 : active raid1 sdc1[0]
      976631424 blocks super 1.2 [2/1] [U_]
      bitmap: 5/8 pages [20KB], 65536KB chunk

unused devices: <none>


My dmesg follows:
https://paste.pound-python.org/show/TkEvEkdArkFHZdvnHBDM/

mdadm -E /dev/sdc1
Code:
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 10cd45c9:810eb9f6:a418ae2c:213d6644
           Name : gentoo1:0  (local to host gentoo1)
  Creation Time : Sat Aug 23 13:16:23 2014
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1953262961 (931.39 GiB 1000.07 GB)
     Array Size : 976631424 (931.39 GiB 1000.07 GB)
  Used Dev Size : 1953262848 (931.39 GiB 1000.07 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=113 sectors
          State : clean
    Device UUID : 6cca87af:982cd843:5562f793:79435d67

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Apr  6 18:27:10 2017
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : d175f63 - correct
         Events : 110013


   Device Role : Active device 0
   Array State : A. ('A' == active, '.' == missing, 'R' == replacing)


mdadm -E /dev/sdb
Code:
/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 2a03d9a9:e30e7c2c:0c477702:fc4f8e26
           Name : gentoo1:1  (local to host gentoo1)
  Creation Time : Wed Apr  5 18:20:36 2017
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1953263024 (931.39 GiB 1000.07 GB)
     Array Size : 976631488 (931.39 GiB 1000.07 GB)
  Used Dev Size : 1953262976 (931.39 GiB 1000.07 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=48 sectors
          State : active
    Device UUID : 62433233:a0b926d7:c5e62180:87385ac4

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Apr  5 18:20:36 2017
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 40a8f822 - correct
         Events : 0


   Device Role : Active device 0
   Array State : A. ('A' == active, '.' == missing, 'R' == replacing)


:?:


Last edited by Elleni on Thu Apr 06, 2017 4:33 pm; edited 1 time in total

frostschutz
Posted: Thu Apr 06, 2017 4:32 pm

The --detail --scan output is just a starting point. It should not be used literally as your mdadm.conf.

In most cases you should be fine by just using the UUID.

So your mdadm.conf should look like this:

Code:

MAILADDR your@address

ARRAY /dev/md0 UUID=d8b8b4e5:e47b2e45:2093cd36:f654020d
ARRAY /dev/md1 UUID=845b2454:42a319ef:6ec5238a:358c365b
ARRAY /dev/md2 UUID=23cf90d2:c05d720e:e72e178d:414a8128


If there is any other stuff such as name=, devices= or level=, it can get in the way (if the hostname doesn't match [it might not be set in the initramfs], the level changes after a grow, device names change, etc.), so unless you need it for something specific, just get rid of it. The UUID alone is enough to properly identify a RAID.

Sometimes there is a copy of mdadm.conf in your initramfs, so you also have to update the initramfs after changing mdadm.conf.
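
One way to get such UUID-only lines is to filter the scan output, something like this (a sketch; double-check the resulting file before rebooting):
Code:
mdadm --detail --scan | sed -e 's/ metadata=[^ ]*//' -e 's/ name=[^ ]*//' >> /etc/mdadm.conf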

Elleni
Posted: Thu Apr 06, 2017 4:37 pm

This is what I have in mdadm.conf:
Code:
MAILADDR an@email.adress
MAILFROM an@email.adress
ARRAY /dev/md0 metadata=1.2 name=hostname:0 UUID=10cd45c9:810eb9f6:a418ae2c:213d6644
ARRAY /dev/md1 metadata=1.2 name=hostname:1 UUID=634527ad:1c661ef8:ecaf541f:9948c0ea


So I deleted the metadata= and name= entries.

And I do not have an initramfs on this box; /boot just contains kernel config files, System.map files and vmlinuz files.

So I did:
Code:

cat /proc/mdstat
Personalities : [raid1]
md127 : active (auto-read-only) raid1 sdb[0]
      976631488 blocks super 1.2 [2/1] [U_]
      bitmap: 0/8 pages [0KB], 65536KB chunk

md0 : active raid1 sdc1[0]
      976631424 blocks super 1.2 [2/1] [U_]
      bitmap: 5/8 pages [20KB], 65536KB chunk

mdadm --stop /dev/md127
mdadm --zero-superblock /dev/sdb1
-> mdadm: Couldn't open /dev/sdb1 for write - not zeroing
mdadm --zero-superblock /dev/sdb
cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb1[0]
      976630464 blocks super 1.2 [2/1] [U_]
      bitmap: 2/8 pages [8KB], 65536KB chunk

md0 : active raid1 sdc1[0]
      976631424 blocks super 1.2 [2/1] [U_]
      bitmap: 5/8 pages [20KB], 65536KB chunk

unused devices: <none>


Now I rebooted, and guess what:

Code:
cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb1[0]
      976630464 blocks super 1.2 [2/1] [U_]
      bitmap: 2/8 pages [8KB], 65536KB chunk

md0 : active raid1 sdc1[0]
      976631424 blocks super 1.2 [2/1] [U_]
      bitmap: 5/8 pages [20KB], 65536KB chunk

unused devices: <none>


Wow, it finally works!?

As I have already copied the files over, I will try to mount /dev/md1 as my user home instead of the old md0, and if everything is all right, I can finally destroy md0, repartition /dev/sdc to get it aligned, and add it to md1 :)

Thank you very much, NeddySeagoon. Does that mean that in the end the metadata= and name= entries in mdadm.conf were causing all this?

NeddySeagoon
Posted: Thu Apr 06, 2017 5:22 pm

Elleni,

It's not the metadata, but it could be the name entry.
It could also be several sets of raid metadata on the same device at the same time.

Elleni
Posted: Thu Apr 06, 2017 5:31 pm

I see, in any case:

BIG THANK YOU!!!!

For your great support :D