HP (Hewlett-Packard) t2808-90006 user manual


A good user manual

The rules require the seller to provide the buyer, together with the goods, with a user manual for the HP (Hewlett-Packard) t2808-90006. The lack of a manual, or incorrect information supplied to the consumer, are grounds for a complaint of non-conformity of the device with the contract. The law allows the manual to be supplied in a form other than paper, an option used increasingly often, by including a graphic or electronic version of the HP (Hewlett-Packard) t2808-90006 manual or instructional videos for users. The condition is that it be legible and understandable.

What is a user manual?

The word comes from the Latin "instructio", that is, to arrange. A user manual such as this one for the HP (Hewlett-Packard) t2808-90006 therefore describes the steps of a procedure. Its purpose is to instruct and to make it easier to start up and operate the equipment, or to perform specific actions. The user manual is a collection of information about the object or service; it is a guide.

Unfortunately, few users take the time to read the user manual, yet a good manual not only teaches a number of additional features of the purchased device, it also helps avoid the majority of failures.

So what should the perfect user manual contain?

First of all, the HP (Hewlett-Packard) t2808-90006 user manual should contain:
- information on the technical characteristics of the HP (Hewlett-Packard) t2808-90006 device
- the name of the manufacturer and the year of manufacture of the HP (Hewlett-Packard) t2808-90006
- instructions for use, adjustment, and maintenance of the HP (Hewlett-Packard) t2808-90006 equipment
- safety markings and certificates confirming conformity with the relevant standards

Why do we not read user manuals?

Usually it is for lack of time and out of certainty about the specific features of the purchased equipment. Unfortunately, merely connecting and starting up the HP (Hewlett-Packard) t2808-90006 is not enough. A manual contains a number of guidelines on specific features, safety, maintenance methods (including which products should be used), possible defects of the HP (Hewlett-Packard) t2808-90006, and ways to resolve the most common problems encountered in use. Finally, the manual contains the contact details of the HP (Hewlett-Packard) service organization for when the suggested solutions do not work. Nowadays, user manuals in the form of engaging animations and instructional videos, which work better than a brochure, are very popular. This type of manual lets the user watch the entire instructional video without skipping the complicated specifications and technical descriptions of the HP (Hewlett-Packard) t2808-90006, as can happen with the paper version.

Why read the user manual?

First of all, it answers questions about the structure and capabilities of the HP (Hewlett-Packard) t2808-90006 device, the use of various accessories, and provides a range of information that lets you take full advantage of all its features and conveniences.

After a successful purchase of the equipment or device, take a moment to familiarize yourself with every part of the HP (Hewlett-Packard) t2808-90006 user manual. Today, manuals are carefully prepared and translated so that they are not only understandable to users but also fulfill their basic function of informing and helping.

Table of contents of the user manual

  • Page 1

    HP Serviceguard Extended Distance Cluster for Linux A.01.00 Deployment Guide. Manufacturing Part Number: T2808-90006. May 2008, Second Edition[...]

  • Page 2

    Legal Notices © Copyright 2006-2008 Hewlett-Packard Development Company, L.P. Publication Date: 2008. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U[...]

  • Page 3

    Contents. 1. Disaster Tolerance and Recovery in a Serviceguard Cluster: Evaluating the Need for Disaster Tolerance ... 14; What is a Disaster Tolerant Architecture? ... 16; Understanding Types of Disaster Tolera[...]

  • Page 4

    Contents. Creating a Multiple Disk Device ... 72; To Create and Assemble an MD Device ... 72; Creating Volume Groups and Configuring VG Exclusive Activation on the MD Mirror ... 74; Configuring t[...]

  • Page 5

    Contents[...]

  • Page 6

    Contents[...]

  • Page 7

    Printing History. The printing date and part number indicate the current edition. The printing date changes when a new edition is printed. (Minor corrections and updates which are incorporated at reprint do not cause the date to change.) The part number changes when extensive technical changes are incorporated. New editions of thi[...]

  • Page 8

    HP Printing Division: Business Critical Computing Business Unit, Hewlett-Packard Co., 19111 Pruneridge Ave., Cupertino, CA 95014[...]

  • Page 9

    Preface. This guide introduces the concept of Extended Distance Clusters (XDC). It describes how to configure and manage HP Serviceguard Extended Distance Clusters for Linux and the associated Software RAID functionality. In addition, this guide includes information on a variety of Hewlett-Packard (HP) high availability cluster t[...]

  • Page 10

    [...]

  • Page 11

    Preface. Related Publications. The following documents contain additional useful information: • Clusters for High Availability: a Primer of HP Solutions, Second Edition. Hewlett-Packard Professional Books: Prentice Hall PTR, 2001 (ISBN 0-13-089355-2) • Designing Disaster Tolerant HA Clusters Using Metrocluster and Continent[...]

  • Page 12

    [...]

  • Page 13

    1 Disaster Tolerance and Recovery in a Serviceguard Cluster. This chapter introduces a variety of Hewlett-Packard high availability cluster technologies that provide disaster tolerance for your mission-critical applications. It is assumed that you are already famil[...]

  • Page 14

    Evaluating the Need for Disaster Tolerance. Disaster tolerance is the ability to restore applications and data within a reasonable period of time after a disaster. Most people think of fire, flood, and earthquake as dis[...]

  • Page 15

    [...] line inoperable as well as the computers. In this case disaster recovery would be moot, and local failover is probably the more appropriate level of protection. On the other hand, you may have an order processing center t[...]

  • Page 16

    What is a Disaster Tolerant Architecture? In a Serviceguard cluster configuration, high availability is achieved by using redundant hardware to eliminate single points of failure. This protects the cluster against hardwa[...]

  • Page 17

    [...] impact. For these types of installations, and many more like them, it is important to guard not only against single points of failure, but against multiple points of failure (MPOF), or against single massive fa[...]

  • Page 18

    Understanding Types of Disaster Tolerant Clusters. To protect against multiple points of failure, cluster components must be geographically dispersed: nodes can be put in different rooms, on differen[...]

  • Page 19

    [...] architecture are followed. Extended distance clusters were formerly known as campus clusters, but that term is not always appropriate because the supported distances have increased beyond the typical size of a si[...]

  • Page 20

    Figure 1-3 Extended Distance Cluster. In the above configuration the network and FC links between the data centers are combined and sent over common DWDM links. Two DWDM links provide redundancy. When one o[...]

  • Page 21

    Figure 1-4 Two Data Center Setup. Figure 1-4 shows a configuration that is supported with separate network and FC links between the data centers. In this configuration, the FC links and the Ethernet network [...]

  • Page 22

    Also note that the networking in the configuration shown is the minimum. Added network connections for additional heartbeats are recommended. Benefits of Extended Distance Cluster: • This configuration implements [...]

  • Page 23

    Cluster Extension (CLX) Cluster. A Linux CLX cluster is similar to an HP-UX metropolitan cluster and is a cluster that has alternate nodes located in different parts of a city or in nearby cities. Putting nodes further a[...]

  • Page 24

    Figure 1-5 shows a CLX for a Linux Serviceguard cluster architecture. Figure 1-5 CLX for Linux Serviceguard Cluster. A key difference between extended distance clusters and CLX clusters is the data replication techn[...]

  • Page 25

    Benefits of CLX: • CLX offers a more resilient solution than Extended Distance Cluster, as it provides complete integration between Serviceguard's application package and the data replication subsyst[...]

  • Page 26

    • Disk resynchronization is independent of CPU failure (that is, if the hosts at the primary site fail but the disk remains up, the disk knows it does not have to be resynchronized). Differences Between Ex[...]

  • Page 27

    [...] "objective" can be set for the recovery point such that if data is updated for a period less than the objective, automated failover can occur and a package will start. If the time is longer than [...]

  • Page 28

    Figure 1-6 Continental Cluster. Continentalclusters provides the flexibility to work with any data replication mechanism. It provides pre-integrated solutions that use HP StorageWorks Continuous Access XP,[...]

  • Page 29

    Benefits of Continentalclusters: • You can virtually build data centers anywhere and still have the data centers provide disaster tolerance for each other. Since Continentalclusters uses two clusters, theoret[...]

  • Page 30

    [...] replicate the data between two data centers. HP provides a supported integration toolkit for Oracle 8i Standby DB in the Enterprise Cluster Management Toolkit (ECMT). • RAC is supported by Continentalclus[...]

  • Page 31

    Table 1-1 Comparison of Disaster Tolerant Cluster Solutions. Columns: Attributes | Extended Distance Cluster | CLX | Continentalclusters (HP-UX only). Key Benefit: Excellent in "normal" operations and partial failure. Since [...]

  • Page 32

    Key Limitation: No ability to check the state of the data before starting up the application. If the volume group (vg) can be activated, the application will be started. If mirrors are split or multiple pa[...]

  • Page 33

    Maximum Distance: 100 kilometers | Shortest of the distances between: • cluster network latency (not to exceed 200 ms) • Data Replication Max Distance • DWDM provider max distance | No d[...]

  • Page 34

    Application Failover type: Automatic (no manual intervention required) | Automatic (no manual intervention required) | Semi-automatic (user must "push the button" to initiate recovery). Access Mode for a packa[...]

  • Page 35

    Data Replication Link: Dark Fiber | Dark Fiber, Continuous Access over IP, Continuous Access over ATM, WAN, LAN | Dark Fiber (pre-integrated solution), Continuous Access over IP (pre-integrated solution), Continuous Access o[...]

  • Page 36

    DTS Software/Licenses Required: SGLX + XDC | SGLX + CLX XP or CLX EVA | SG + Continentalclusters + (Metrocluster Continuous Access XP or Metrocluster Continuous Access EVA or Metrocluster EMC SRDF or Enterprise C[...]

  • Page 37

    Disaster Tolerant Architecture Guidelines. Disaster tolerant architectures represent a shift away from the massive central data centers and towards more distributed data processing facilities. While each architecture[...]

  • Page 38

    Protecting Data through Replication. The most significant losses during a disaster are the loss of access to data, and the loss of data itself. You protect against this loss through data replication, that is, c[...]

  • Page 39

    [...] depending on the volume of data. Some applications, depending on the role they play in the business, may need to have a faster recovery time, within hours or even minutes. On-line Data Replication. On-line data replication i[...]

  • Page 40

    Figure 1-7 Physical Data Replication. MD Software RAID is an example of physical replication done in the software; a disk I/O is written to each array connected to the node, requiring the node to make multiple disk I[...]

  • Page 41

    • The logical order of data writes is not always maintained in synchronous replication. When a replication link goes down and transactions continue at the primary site, writes to the primary disk are queued in a bit[...]

  • Page 42

    • Because there are multiple read devices, that is, the node has access to both copies of data, there may be improvements in read performance. • Writes are synchronous unless the link or disk is down. Disadvantage[...]

  • Page 43

    Figure 1-8 Logical Data Replication. Advantages of using logical replication are: • The distance between nodes is limited only by the networking technology. • There is no additional hardware needed to do logical replicati[...]

  • Page 44

    • If the primary database fails and is corrupt, which results in the replica taking over, then the process for restoring the primary database so that it can be used as the replica is complex. This o[...]

  • Page 45

    Figure 1-9 Alternative Power Sources. Housing remote nodes in another building often implies they are powered by a different circuit, so it is especially important to make sure all nodes are powered from a different source i[...]

  • Page 46

    Disaster Tolerant Local Area Networking. Ethernet networks can also be used to connect nodes in a disaster tolerant architecture within the following guidelines: • Each node is connected to redundant switches and[...]

  • Page 47

    Disaster Tolerant Cluster Limitations. Disaster tolerant clusters have limitations, some of which can be mitigated by good planning. Some examples of MPOF that may not be covered by disaster tolerant configurations: • F[...]

  • Page 48

    Managing a Disaster Tolerant Environment. In addition to the changes in hardware and software to create a disaster tolerant architecture, there are also changes in the way you manage the environment. Configuration of a [...]

  • Page 49

    Even if recovery is automated, you may choose to, or need to, recover from some types of disasters with manual recovery. A rolling disaster, which is a disaster that happens before the cluster has recovered from a previo[...]

  • Page 50

    Additional Disaster Tolerant Solutions Information. On-line versions of HA documentation are available at http://docs.hp.com -> High Availability -> Serviceguard for Linux. For information on CLX fo[...]

  • Page 51

    2 Building an Extended Distance Cluster Using Serviceguard and Software RAID. Simple Serviceguard clusters are usually configured in a single data center, often in a single room, to provide protection against failures in CPUs, interface cards, an[...]

  • Page 52

    Types of Data Link for Storage and Networking. Fibre Channel technology lets you increase the distance between the components in a Serviceguard cluster, thus making it po[...]

  • Page 53

    Two Data Center and Quorum Service Location Architectures. A two data center architecture with the Quorum Service at a third location has the following configuration requirements: NOTE: There i[...]

  • Page 54

    • Fibre Channel Direct Fabric Attach (DFA) is recommended over Fibre Channel Arbitrated Loop configurations, due to the superior performance of DFA, especially as the distanc[...]

  • Page 55

    Figure 2-1 Two Data Centers and Third Location with DWDM and Quorum Server. Figure 2-1 is an example of a two data center and third location configuration using DWDM, with a quorum server nod[...]

  • Page 56

    There are no requirements for the distance between the Quorum Server data center and the primary data centers; however, it is necessary to ensure that the Quorum Server can be contacted within[...]

  • Page 57

    Rules for Separate Network and Data Links: • There must be less than 200 milliseconds of latency in the network between the data centers. • No routing is allowed for the networks between the da[...]

  • Page 58

    Guidelines on DWDM Links for Network and Data: • There must be less than 200 milliseconds of latency in the network between the data centers. • No routing is allowed for the networks between t[...]

  • Page 59

    • Fibre Channel switches must be used in a DWDM configuration; Fibre Channel hubs are not supported. Direct Fabric Attach mode must be used for the ports connected to the DWDM link. See the HP Configurati[...]

  • Page 60

    [...]

  • Page 61

    3 Configuring your Environment for Software RAID. The previous chapters discussed conceptual information on disaster tolerant architectures and procedural information on creating an extended distance cluster. This chapter discusses the procedures you need to follow to co[...]

  • Page 62

    Understanding Software RAID. Redundant Array of Independent Disks (RAID) is a mechanism that provides storage fault tolerance and, occasionally, better performance. Software RAID is designed on the concept of RAID 1. RAID 1 uses mirroring wher[...]

  • Page 63

    Installing the Extended Distance Cluster Software. This section discusses the supported operating systems, prerequisites, and the procedures for installing the Extended Distance Cluster software. Supported Operating Systems [...]

  • Page 64

    Complete the following procedure to install XDC: 1. Insert the product CD into the drive and mount the CD. 2. Open the command line interface. 3. If you are installing XDC on Red Hat 4, run the following command: # r[...]
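    The install command itself is cut off in this excerpt. For an RPM-based product like XDC, the step would typically look like the sketch below; the mount point and the exact .rpm file name are assumptions (only the product name xdc-A.01.00-0, quoted on page 65, comes from the guide):

        # Mount the product CD (mount point is an example)
        mount /dev/cdrom /mnt/cdrom
        # Install the XDC package; this .rpm file name is hypothetical
        rpm -ivh /mnt/cdrom/xdc-A.01.00-0.rhel4.noarch.rpm
        # Verify the installation; xdc-A.01.00-0 should appear (page 65)
        rpm -qa | grep xdc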

  • Page 65

    In the output, the product name xdc-A.01.00-0 will be listed. The presence of this file verifies that the installation is successful.[...]

  • Page 66

    Configuring the Environment. After setting up the hardware as described in the Extended Distance Cluster Architecture section and installing the Extended Distance Cluster software, complete the following steps to enable Software RAID for each packa[...]

  • Page 67

    [...] that are of identical sizes. Differences in disk set size result in a mirror being created of a size equal to the smaller of the two disks. Be sure to create the mirror using the persistent device names of the component devices. For mo[...]

  • Page 68

    • Ensure that the Quorum Server link is close to the Ethernet links in your setup. In cases of failures of all Ethernet and Fibre Channel links, the nodes can easily access the Quorum Server for arbitration. • The Quorum Server is configured in a[...]

  • Page 69

    Configuring Multiple Paths to Storage. HP requires that you configure multiple paths to the storage device using the QLogic HBA driver, as it has inbuilt multipath capabilities. Use the install script with the "-f" option to enab[...]

  • Page 70

    The QLogic cards are configured to hold up any disk access, essentially hanging for a time period greater than the cluster reformation time, when access to a disk is lost. This is achieved by altering the Link Down Timeout value fo[...]

  • Page 71

    Using Persistent Device Names. When there is a disk related failure and subsequent reboot, there is a possibility that the devices are renamed. Linux names disks in the order they are found. The device that was /dev/sdf may be renamed to[...]
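    The excerpt truncates before the guide's own naming procedure. As a hedged illustration of the general technique behind the /dev/hpdev/... names used throughout this guide, a udev rule can pin a stable symlink to a disk identifier that survives reboots (the rule file name and the matching key are assumptions, and udev syntax varies across versions):

        # /etc/udev/rules.d/60-hpdev.rules -- hypothetical rule file
        # Always expose this disk as /dev/hpdev/mylink-sde, whatever
        # /dev/sdX name the kernel assigns after a reboot.
        KERNEL=="sd?", ENV{ID_SERIAL}=="<your-disk-serial>", SYMLINK+="hpdev/mylink-sde"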

  • Page 72

    Creating a Multiple Disk Device. As mentioned earlier, the first step for enabling Software RAID in your environment is to create the Multiple Disk (MD) device using two underlying component disks. This MD device is a virtual device which ensures t[...]

  • Page 73

    2. Assemble the MD device on the other node by running the following command: # mdadm -A -R /dev/md0 /dev/hpdev/sde1 /dev/hpdev/sdf1. 3. Stop the MD device on the other node by running the following command: # mdadm -S /dev/md0. You must stop th[...]
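    The creation step (step 1) is truncated from the page 72 excerpt. A minimal sketch of the full sequence, assuming a two-disk RAID 1 mirror consistent with the assemble command quoted above (the --create flags are standard mdadm usage, not quoted from the guide):

        # Step 1 (truncated above) -- create the two-disk RAID 1 mirror
        mdadm --create /dev/md0 --level=1 --raid-devices=2 \
            /dev/hpdev/sde1 /dev/hpdev/sdf1
        # Step 2 -- assemble and run the device on the other node
        mdadm -A -R /dev/md0 /dev/hpdev/sde1 /dev/hpdev/sdf1
        # Step 3 -- stop it so the cluster software controls activation
        mdadm -S /dev/md0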

  • Page 74

    Creating Volume Groups and Configuring VG Exclusive Activation on the MD Mirror. Once you create the MD mirror device, you need to create volume groups and logical volumes on it. NOTE: XDC A.0[...]
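    The LVM commands themselves are truncated from this excerpt; a minimal sketch using standard LVM2 tools, with hypothetical volume group and logical volume names:

        # Initialize the MD mirror as an LVM physical volume
        pvcreate /dev/md0
        # Create a volume group and a logical volume on it (names are examples)
        vgcreate vgpkg /dev/md0
        lvcreate -L 10G -n lvol1 vgpkg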

  • Page 75

    [...] Found duplicate PV 9w3TIxKZ6lFRqWUmQm9tlV5nsdUkTi4i: using /dev/sde not /dev/sdf. With this error, you cannot create a new volume group on /dev/md0. As a result, you must create a filter for [...]
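    A hedged example of such a filter in /etc/lvm/lvm.conf: accept the MD device and reject its raw component disks, so LVM no longer sees the same PV label twice (the exact patterns depend on your device names):

        # /etc/lvm/lvm.conf, devices section -- patterns are examples
        filter = [ "a|/dev/md0|", "r|/dev/sde|", "r|/dev/sdf|" ]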

  • Page 76

    Configuring the Package Control Script and RAID Configuration File. This section describes the package control scripts and configuration files that you need to create and edit to enable Software RAID in your [...]

  • Page 77

    # Specify the method of activation and deactivation for md. # Leave the default (RAIDSTART="raidstart", RAIDSTOP="raidstop") if you want # md to be started and stopped with default metho[...]

  • Page 78

    To Edit the XDC_CONFIG_FILE parameter. In addition to modifying the DATA_REP variable, you must also set XDC_CONFIG_FILE to specify the raid.conf file for this package. This file resides in the package dire[...]
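    A minimal sketch of the two control-script settings this page describes; the package directory path is hypothetical and the DATA_REP value is truncated from the excerpt, so it is left as a placeholder:

        # Package control script excerpt (values are examples)
        DATA_REP="..."    # set as described on this page; value not shown in excerpt
        XDC_CONFIG_FILE="/usr/local/cmcluster/pkg1/raid.conf"    # raid.conf in the package directory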

  • Page 79

    [...] more time elapses than what is specified for RPO_TARGET, the package is prevented from starting on the remote node (assuming that the node still has access only to its own half of the mirror). By default[...]
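    A hedged raid.conf sketch for this parameter; the value 60 mirrors the scenarios on pages 91-94, and IGNORE is the alternative used on page 90 (the file's exact layout is not shown in these excerpts):

        # raid.conf excerpt -- illustrative values
        RPO_TARGET=60    # seconds; or RPO_TARGET=IGNORE to skip the check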

  • Page 80

    For example, let us assume that the data storage links in Figure 1-4 fail before the heartbeat links fail. In this case, after the time specified by Link_Down_Timeout has elapsed, a package in Data[...]

  • Page 81

    Now consider an XDC configuration such as that shown in Figure 1-3 (DWDM links between data centers). If DC1 fails such that links A and B both fail simultaneously, and DC1's connection to the Qu[...]

  • Page 82

    Again, if the network is set up in such a way that when the links between the sites fail, the communication links to the application clients are also shut down, then the unintended writes are not acknowl[...]

  • Page 83

    • RAID_MONITOR_INTERVAL: This parameter defines the time interval, in seconds, the RAID monitor script waits between each check to verify accessibility of both component devices of all mirror devices used by[...]
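    Continuing the hedged raid.conf sketch from above; the default value is truncated from this excerpt, so 30 is only an example:

        RAID_MONITOR_INTERVAL=30    # seconds between accessibility checks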

  • Page 84

    [...]

  • Page 85

    4 Disaster Scenarios and Their Handling. The previous chapters provided information on deploying Software RAID in your environment. In this chapter, you will find information on how Software RAID addresses various disaster scenarios. All the disaster scenarios described in this [...]

  • Page 86

    The following table lists all the disaster scenarios that are handled by the Extended Distance Cluster software. All the scenarios assume that the setup is the same as the one described in "Extended Distance Clusters" on page 18 of this document. Table 4-1 Disaster S[...]

  • Page 87

    A package (P1) is running on a node (Node 1). The package uses a mirror (md0) that consists of two storage components - S1 (local to Node 1 - /dev/hpdev/mylink-sde) and S2 (local to Node 2). Access to S1 is lost from both nodes, either due to power failure to S1 or loss of FC links to[...]

  • Page 88

    A package (P1) is running on a node (Node 1). The package uses a mirror (md0) that consists of two storage components - S1 (local to Node 1 - /dev/hpdev/mylink-sde) and S2 (local to Node 2). Data center 1, which consists of Node 1 and P1, experiences a failure. NOTE: In this example, fa[...]

  • Page 89

    This is a multiple failure scenario where the failures occur in a particular sequence, in the configuration that corresponds to figure 2, where Ethernet and FC links do not go over DWDM. The package (P1) is running on a node (N1). P1 uses a mirror md0 consisting of S1 (local to[...]

  • Page 90

    This is a multiple failure scenario where the failures occur in a particular sequence, in the configuration that corresponds to figure 2, where Ethernet and FC links do not go over DWDM. The RPO_TARGET for the package P1 is set to IGNORE. The package is running on Node 1. P1 [...]

  • Page 91

    This failure is the same as the previous failure except that the package (P1) is configured with RPO_TARGET set to 60 seconds. In this case, initially the package (P1) is running on N1. P1 uses a mirror md0 consisting of S1 (local to node N1 - /dev/hpdev/mylink-sde) and [...]

  • Page 92

    In this case, the package (P1) runs with RPO_TARGET set to 60 seconds. Package P1 is running on node N1. P1 uses a mirror md0 consisting of S1 (local to node N1, for example /dev/hpdev/mylink-sde) and S2 (local to node N2). The first failure occurs when all FC links between two data cen[...]

  • Page 93

    This scenario is an extension of the previous failure scenario. In the previous scenario, when the package fails over to N2, it does not start as the value of RPO_TARGET would have been exceeded. To forcefully start the package P1 on N2 when the FC links are not restored o[...]

  • Page 94

    In this case, the package (P1) runs with RPO_TARGET set to 60 seconds and is initially running on node N1. P1 uses a mirror md0 consisting of S1 (local to node N1, for example /dev/hpdev/mylink-sde) and S2 (local to node N2). The first failure occurs whe[...]

  • Page 95

    In this case, initially the package (P1) is running on node N1. P1 uses a mirror md0 consisting of S1 (local to node N1, for example /dev/hpdev/mylink-sde) and S2 (local to node N2). The first failure occurs with all Ethernet links between the two data centers failing. With this fa[...]

  • Page 96

    [...]

  • Page 97

    A Managing an MD Device. This chapter includes additional information on how to manage the MD device. For the latest information on how to manage an MD device, see The Software-RAID HOWTO manual available at: http://www.tldp.org/HOWTO/Software-RAID-HOWTO.html. Following are the topics discuss[...]

  • Page 98

    Viewing the Status of the MD Device. After creating an MD device, you can view its status. By doing so, you can remain informed of whether the device is clean, up and running, or if there are any errors. To view the status of the MD device, run the following com[...]
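    The command itself is truncated here. Two standard, stock interfaces for inspecting an MD device (a sketch, not quoted from the guide):

        # Kernel summary of all MD devices and their sync state
        cat /proc/mdstat
        # Detailed state of one device: clean/degraded, component health
        mdadm --detail /dev/md0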

  • Page 99

    Stopping the MD Device. After you create an MD device, it begins to run. You need to stop the device and add the configuration into the raid.conf file. To stop the MD device, run the following command: # mdadm -S <md_device_name>. When you stop this device, all resources[...]
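    For example, with the mirror device used throughout this guide:

        mdadm -S /dev/md0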

  • Page 100

    Starting the MD Device. After you create an MD device, you would need to stop and start the MD device to ensure that it is active. You would not need to start the MD device in any other scenario as this is handled by the XDC software. To start the MD device, run the followi[...]
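    The start command is truncated here; page 73 of this guide starts the device by assembling it in run mode, so this sketch reuses that form:

        # Assemble and immediately run the mirror (form quoted on page 73)
        mdadm -A -R /dev/md0 /dev/hpdev/sde1 /dev/hpdev/sdf1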

  • Page 101

    Removing and Adding an MD Mirror Component Disk. There are certain failure scenarios where you would need to manually remove the mirror component of an MD device and add it again later. For example, if links between two data centers fail, you would ne[...]

  • Page 102

    Example A-3 Removing a failed MD component disk from the /dev/md0 array. To remove a failed MD component disk from /dev/md0, run the following command: # mdadm --remove /dev/md0 /dev/hpdev/sde. Following is an example of the status message tha[...]
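    The re-add step is truncated from this excerpt. With stock mdadm the full cycle looks like the following sketch (the --fail step is needed only if the kernel has not already marked the disk faulty):

        # Mark the component faulty, then remove it from the array
        mdadm --fail /dev/md0 /dev/hpdev/sde
        mdadm --remove /dev/md0 /dev/hpdev/sde
        # Later, re-add the disk; mdadm resynchronizes it into the mirror
        mdadm --add /dev/md0 /dev/hpdev/sde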

  • Page 103

    Index. A: asynchronous data replication, 39. C: cluster: extended distance, 22; Fibre Channel, 52; metropolitan, 23; wide area, 27. cluster maintenance, 49. configuring, 46: disaster tolerant Ethernet networks, 46; disaster tolerant WAN, 46. consistency of data, 38. continental cluster, 27. currency of data, 38. D: data center, 17. dat[...]

  • Page 104

    Index. persistent device names, 66. physical data replication, 39. power sources, redundant, 44. Q: QLogic cards, 70. R: RAID Monitoring Service, configure, 78. raid.conf file, edit, 78. RAID_MONITOR_INTERVAL, 83. recoverability of data, 38. redundant power sources, 44. replicating data, 38: off-line, 38; online, 39. rolling disaster, 49[...]