Combining computers into a Windows 10 cluster. Desktop cluster. Installing Active Directory

Working on a single machine is no longer fashionable,
or: building a cluster at home.

1. Introduction

Many of you have several Linux machines on your local network whose processors sit idle practically all the time. Many have also heard of systems in which machines are combined into a single supercomputer. But few have actually tried such experiments at work or at home. Let's try to put together a small cluster. Having built a cluster, you can genuinely speed up some of your tasks - for example, compilation, or running several resource-hungry processes at once. In this article I will try to show you how, without much effort, you can combine the machines of your local network into a single cluster based on MOSIX.

2. How, what and where.

MOSIX is a patch for the Linux kernel plus a set of utilities that allows processes on your machine to move (migrate) to other nodes of the local network. You can get it at http://www.mosix.cs.huji.ac.il; it is distributed in source form under the GPL license. Patches exist for all kernels of the stable Linux branch.

3. Installing the software.

Right at the start I would recommend that you grab from the MOSIX site not only MOSIX itself but also the accompanying utilities - mproc, mexec and others.
The MOSIX archive contains an installation script, mosix_install. Do not forget that you absolutely must unpack the kernel sources into /usr/src/linux-*; for example, I put them into /usr/src/linux-2.2.13. Then run mosix_install and answer all of its questions, giving it your boot manager (LILO), the path to the kernel sources and the runlevels to start at.
When configuring the kernel, enable the options CONFIG_MOSIX, CONFIG_BINFMT_ELF and CONFIG_PROC_FS. All of these options are described in detail in the MOSIX installation guide.
Installed? Well then - reboot your Linux with the new kernel, whose name will look very much like mosix-2.2.13.
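
If it helps to see the whole procedure in one place, here is a minimal sketch of the sequence described above. The archive names, the kernel version and the exact split of work between mosix_install and a manual kernel build are assumptions that depend on the MOSIX release you downloaded:

# unpack the kernel sources where the installer expects them
cd /usr/src
tar xzf linux-2.2.13.tar.gz            # 2.2.x tarballs usually unpack into linux/
mv linux linux-2.2.13
# unpack the MOSIX archive and run its installer
tar xzf MOSIX-<version>.tar.gz
cd MOSIX-<version>
./mosix_install                        # answer: boot manager (LILO), kernel path, runlevels
# rebuild the patched kernel with CONFIG_MOSIX, CONFIG_BINFMT_ELF and CONFIG_PROC_FS enabled
cd /usr/src/linux-2.2.13
make menuconfig
make dep && make bzImage && make modules && make modules_install
lilo                                   # update the boot loader, then reboot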

4. Configuration

A freshly installed MOSIX has no idea which machines are on your network or which ones it should talk to. Configuring this is very simple. If you have just installed mosix and your distribution is compatible - SuSE or RedHat - go to the directory /etc/rc.d/init.d and give the command mosix start. On the first run this script asks you to configure MOSIX and launches a text editor to create the file /etc/mosix.map, which contains the list of the nodes of your cluster. There we write the following: if you have only two or three machines and their IP addresses follow
one another in sequence, write it like this:



1 10.152.1.1 5

where the first parameter is the number of the starting node, the second is the IP address of the first node, and the last is the number of nodes counting from the current one. That is, we now have five nodes in the cluster, whose IP addresses end in 1, 2, 3, 4 and 5.
Or another example:

node number    IP               number of nodes starting from this one
______________________________________
1 10.152.1.1 1
2 10.150.1.55 2
4 10.150.1.223 1

With this configuration we get the following layout:
IP of node 1: 10.150.1.1
IP of node 2: 10.150.1.55
IP of node 3: 10.150.1.56
IP of node 4: 10.150.1.223
Now you need to install MOSIX on all the machines of the future cluster and create the same configuration file /etc/mosix.map on every one of them.
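
One way to make sure every node gets the same file (assuming the nodes already run an ssh daemon and you have root access to them; the addresses below are taken from the first example above) is simply to copy it from the machine where you edited it:

# push the finished map to the remaining nodes of the future cluster
for host in 10.152.1.2 10.152.1.3 10.152.1.4 10.152.1.5; do
    scp /etc/mosix.map root@$host:/etc/mosix.map
done
# then restart mosix on every node, for example:
# ssh root@10.152.1.2 /etc/rc.d/init.d/mosix restart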

Now, after restarting mosix, your machine will already be running in the cluster, which you can see by launching the monitor with the mon command. If you see only your own machine in the monitor, or do not see anyone at all, then, as they say, you have some digging to do. Most likely the error is in /etc/mosix.map.
Well, you can see the nodes, but you have not won yet. What next? The next step is very simple :-) - build the utilities for working with the modified /proc from the mproc package. In particular, that package includes a nice modification of top called mtop, which adds the ability to display the node, to sort by node, to move a process from the current node to another one, and to set the minimum node CPU load above which processes start migrating to other MOSIX nodes.
Launch mtop, pick a non-sleeping process you like (I recommend starting bzip), boldly press the "g" key on your keyboard, then enter at the prompt the PID of the process chosen as the victim, and then the number of the node you want to send it to. After that, watch carefully the results displayed by the mon command - that machine should begin to take on the load of the chosen process.
And mtop itself will show, in the #N field, the number of the node where each process is running.
But that is not all - surely you do not want to send processes to other nodes by hand? I certainly did not. MOSIX has decent built-in load balancing within the cluster, which distributes the load more or less evenly across all the nodes. This is where we will have to do a little work. To begin with, I will describe how to do the fine tuning (tune) for two nodes of the cluster, during which MOSIX obtains information about the speeds of the processors and of the network:
Remember once and for all - tune can only be run in single-user mode. Otherwise you will either get a not entirely correct result, or your machine may simply hang.
So, we run tune. After switching the operating system to single-user mode, for example with the command init 1 or init S, run the prep_tune script, which will bring up the network
interfaces and start MOSIX. After that, on one of the machines we run tune, enter the number of the other node to tune against and wait for the result - the utility should prompt for six numbers obtained by running the command tune -a <node> on the other node. The operation then has to be repeated on the other node with the command tune -a <node>. After such tuning, the file /etc/overheads should appear on your system, containing information for MOSIX in the form of certain numeric data. If for some reason tune was unable to create it, simply copy the file mosix.cost from the current directory to /etc/overheads. That helps ;-).
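
Put together as a command sequence, the two-node tuning looks roughly like this (node numbers are examples; tune itself prompts interactively, exactly as described above):

# on both nodes: drop to single-user mode and bring MOSIX up
init 1                      # or: init S
./prep_tune                 # brings up the network interfaces and starts MOSIX

# on node 1: start the interactive tuning
tune                        # when asked, give it the number of the other node (here: 2)

# on node 2: produce the six numbers that tune on node 1 asks for
tune -a 1

# if /etc/overheads did not appear afterwards, fall back to the shipped costs:
cp mosix.cost /etc/overheads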
When tuning a cluster of more than two machines, use the utility that also ships with MOSIX - tune_kernel. This utility lets you
configure the cluster in a simpler and more familiar way, by answering a few questions and running the tuning between two machines of the cluster.
By the way, from my own experience I can say that while setting up a cluster I recommend not loading the network, but rather the opposite - suspending all active operations on the local network.

5. Managing the cluster

For managing a cluster node there is a small set of commands, among them:

mosctl - control over the node. Lets you change node parameters such as block, stay, lstay, delay, etc.
Let's look at a few parameters of this utility:
stay - stops the migration of processes from the current machine to other nodes. Cancelled with the nostay or -stay parameter.
lstay - forbids migration only for local processes, while processes from other machines may keep migrating. Cancelled with the nolstay or -lstay parameter.
block - forbids remote/guest processes from running on this node. Cancelled with the noblock or -block parameter.
bring - brings back all processes of the current node that are running on other machines of the cluster. This parameter may not take effect until the migrated process receives an interrupt from the system.
setdelay sets the time after which a process begins to migrate.
After all, you will agree: if a process runs for less than a second, there is no point in moving it to other machines on the network. It is exactly this time that is set with the mosctl utility and the setdecay parameter. Example:
mosctl setdecay 1 500 200
sets the time before moving to other nodes to 500 milliseconds if the process was started as slow, and to 200 milliseconds for fast processes. Note that the slow parameter must always be greater than or equal to the fast parameter.
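
A few more mosctl invocations in the same spirit; the parameter names are the ones listed above, and man mosctl has the authoritative spelling for your version:

mosctl stay        # stop local processes from migrating away from this node
mosctl nostay      # allow them to migrate again
mosctl lstay       # pin only local processes; guest processes may still move on
mosctl block       # refuse guest processes from other nodes
mosctl noblock     # accept guest processes again
mosctl bring       # pull this node's migrated processes back home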

mosrun - runs an application in the cluster. For example, mosrun -e -j5 make will start make on node 5 of the cluster, and all of its child processes will also run on node 5. There is one nuance here, though, and quite a significant one:
if the child processes finish faster than the delay set with the mosctl utility, the process will not migrate to other nodes of the cluster. mosrun has quite a few other interesting parameters, but you can learn
about them in detail from the manual for this utility (man mosrun).

mon - as we already know, is the cluster monitor, which displays in pseudo-graphical form the load of each working node of your cluster, the amount of free and used memory of the nodes, and outputs a lot of other, no less interesting, information.

mtop - a version of the top command modified for use on the cluster nodes. It shows on screen dynamic information about the processes started on this node and about the nodes to which your processes have migrated.

mps - likewise a modified version of the ps command. One more field has been added - the number of the node to which the process has migrated.

These are, in my view, all the main utilities. In fact, of course, you can get by without them at all, for example by using /proc/mosix to control the cluster.
Besides the basic information about the node's settings, the processes started from other nodes and so on, you can also change some of the parameters there.
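
For instance, something along these lines should work, although the exact file names under /proc/mosix differ between MOSIX releases, so treat the paths below as assumptions and check what your kernel actually exposes:

ls /proc/mosix/admin              # see which knobs this MOSIX version exposes
cat /proc/mosix/admin/stay        # is local migration currently disabled?
echo 1 > /proc/mosix/admin/stay   # roughly the same effect as "mosctl stay"
echo 0 > /proc/mosix/admin/stay   # ... and as "mosctl nostay"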

6. Experimenting.

Unfortunately, I did not manage to make any single process run simultaneously on several nodes. The most I achieved in my experiments with the cluster was using it to run resource-hungry processes on another node.
Let's look at one of the examples:
Suppose two machines (two nodes) are running in our cluster, one with number 1 (a Celeron 366), the other with number 5 (a PIII 450). We will experiment on node 5. Node 1 was idle at the time. ;-)
So, on node 5 we start the crark utility for brute-forcing the password of a rar archive. If any of you have tried working with such utilities, you know that the password-search process "eats" up to 99 percent of the CPU. Well - after starting it we observe that the process stays on this node, node 5. Reasonable - after all, this node's performance is almost twice that of node 1.
Next we simply started a build of kde 2.0. Looking at the process table, we see that crark has successfully migrated to node 1, freeing up the processor and memory (yes, yes - memory is freed in exactly the same way) for make. And as soon as make finished its work, crark came back to its native node 5.
An interesting effect occurs if crark is started on the slower node 1.
There we observe practically the opposite result - the process immediately migrates to node 5, the faster one. It then comes back when the owner of the fifth computer starts doing something with the system.

7. Usage

Finally, let's work out why and how we can use a cluster in our everyday life.
First of all, remember once and for all: a cluster pays off only when your network contains a fair number of machines that are frequently idle and you want to use their resources, for example, for building KDE or for any other serious processes. Thanks to a cluster of 10 machines, you can simultaneously
compile up to 10 heavy programs in that same C++. Or search for some password
without interrupting that process for a single second, regardless of the load on your own computer.
And in general - it is simply interesting ;-).

8. Conclusion

In conclusion, I want to say that this article does not cover all the capabilities of MOSIX, simply because I have not yet got around to them. If I do - expect a sequel. :-)

Today, the business processes of many companies are completely tied to information
technology. With organizations growing ever more dependent on the operation of computing
networks, the availability of services at any time and under any load plays a major
role. A single computer can provide only a basic level of reliability and
scalability; the maximum level can be achieved by combining two or more
computers into a single system - a cluster.

What is a cluster for?

Clusters are used in organizations that need round-the-clock,
uninterrupted availability of services, where any interruption in operation is undesirable
or unacceptable. They are also used where a surge in load is possible that
the main server cannot cope with; in that case additional
hosts, which usually perform other tasks, are brought in to help. For a mail server that handles
tens or hundreds of thousands of letters per day, or a web server serving an
online shop, the use of a cluster is highly desirable. To the user
such a system remains completely transparent - the whole group of computers
looks like a single server. Using several, even cheaper,
computers provides very significant advantages over a single
fast server: a uniform distribution of incoming requests,
increased fault tolerance (when one element fails, its load is
picked up by the other systems), scalability, convenient maintenance and replacement
of cluster nodes, and much more. The failure of one node is detected
automatically and the load is redistributed; to the client all of this remains
unnoticed.

Features of Win2k3

Generally speaking, some clusters are designed to increase data availability,
others for maximum performance. In the context of this article we
are interested in MPP (Massive Parallel Processing) clusters, in
which the same type of application runs on several computers, providing
scalability of the service. There are several technologies that allow
load to be distributed across several servers: traffic redirection,
address translation, DNS Round Robin, and the use of special
programs
operating at the application layer, such as web accelerators. In
Win2k3, unlike Win2k, clustering support is built in, and
two types of clusters are supported, differing in the applications they serve and in the kind of
data they work with:

1. NLB (Network Load Balancing) clusters - provide
scalability and high availability of services and applications based on the TCP
and UDP protocols, combining into one cluster up to 32 servers with the same data set, on
which the same applications run. Each request is executed as a
separate transaction. They are used to work with rarely changing sets of
data, such as WWW, ISA, terminal services and other similar services.

2. Server clusters - can combine up to eight nodes; their main
task is to ensure application availability in case of failure. They consist of active and
passive nodes. The passive node sits idle most of the time, playing the role of
a reserve for the main node. For individual applications it is possible to configure
several active servers, distributing the load between them. Both nodes
are connected to a single data store. A server cluster is used for work
with large volumes of frequently changing data (mail, file and
SQL servers). Moreover, such a cluster cannot be made up of nodes running
different variants of Win2k3 - Enterprise or Datacenter (the Web and
Standard editions do not support server clusters at all).

In Microsoft Application Center 2000 (and only there) there was one more kind of
cluster - CLB (Component Load Balancing), providing the ability to
distribute COM+ applications across several servers.

NLB clusters

When load balancing is used, each of the hosts creates
a virtual network adapter with its own independent IP and MAC address.
This virtual interface presents the cluster as a single node; clients
address it by this virtual address. All requests are received by every
cluster node, but are processed by only one of them. The Network Load Balancing
Service runs on all nodes
and, using a special algorithm that requires no data exchange between
the nodes, decides whether a particular node needs to process a request or
not. The nodes exchange heartbeat messages indicating their
availability. If a host stops sending heartbeats, or a new node appears,
the other nodes start a convergence process, redistributing
the load anew. Balancing can be implemented in one of two
modes:

1) unicast - unicast mode, in which the MAC address of the cluster's virtual adapter
is used instead of the physical MAC. In this case the cluster nodes
cannot communicate with each other by MAC address, only over IP
(or via a second adapter not associated with the cluster);

2) multicast - multicast mode, in which the cluster MAC is a multicast address added
alongside each adapter's own physical MAC, so the nodes keep their original addresses
and can still talk to each other directly.

Only one of these modes should be used within the same cluster.

Several NLB clusters can be configured on one network adapter by
specifying specific rules for ports. Such clusters are called virtual. Using them,
you can assign to each application, host or IP address
specific computers in the primary cluster, or block traffic for
some application without affecting the traffic of other programs running
on the same node. Conversely, the NLB component can be bound to several
network adapters, which lets you configure a number of independent clusters on each
node. You should also be aware that configuring server clusters and NLB on the same node
is not possible, because they work with network devices differently.

An administrator can build a kind of hybrid configuration that has
the advantages of both methods, for example by creating an NLB cluster and setting up replication of
data between the nodes. But replication is performed not continuously but periodically,
so the information on different nodes will differ for some time.

Let's finish with the theory, although you can talk about building clusters
for a long time, listing the possibilities and ways of building up, giving various
recommendations and options for specific implementation. Let's leave all these subtleties and nuances
for self-study and move on to the practical part.

Setting up an NLB cluster

Organizing an NLB cluster requires no additional software; it is
done with the tools already available in Win2k3. To create, maintain and monitor
NLB clusters, the Network Load Balancing Manager component
(Network Load Balancing Manager)
is used, which is found under
"Administrative Tools" in the "Control Panel" (command NLBMgr). Since the
"Network Load Balancing" component is installed as a standard Windows network driver,
NLB can also be installed using the Network Connections component, in
which the corresponding item is available. But it is better to use only the first
option; activating the NLB manager and "Network Connections" at the same time
may lead to unpredictable results.

The NLB Manager lets you configure and manage several
clusters and nodes at once from a single place.

It is also possible to install an NLB cluster on a computer with a single network
adapter associated with the Network Load Balancing component, but in this
case, in unicast mode, the NLB manager on that computer cannot be
used to control other nodes, and the nodes themselves cannot exchange
information with each other.
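
Besides the graphical manager, the state of a node can be checked and changed from the command line with the wlbs.exe utility that ships with NLB; a few typical calls, run on the node itself:

rem show the state of the cluster and of every node
wlbs query
rem finish the current connections on this node, then take it out of the cluster
wlbs drainstop
rem bring the node back into the cluster
wlbs start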

Now we launch the NLB manager. We have no clusters yet, so the
window that appears contains no information. Select "New" from the "Cluster" menu and
start filling in the fields of the "Cluster Parameters" window. In the
"Cluster IP parameters" field, enter the value of the cluster's virtual IP address, the subnet
mask and the fully qualified name. The value of the virtual MAC address is set
automatically. A little lower we select the cluster operating mode: unicast or
multicast. Pay attention to the "Allow remote control" checkbox - in
all of its documents Microsoft strongly recommends not using it, to
avoid security problems. Instead you should use the
manager or other remote-control means, for example the Windows Management
Instrumentation (WMI) toolkit. If the decision to use it is made nonetheless,
take all appropriate measures to protect the network, additionally closing off
UDP ports 1717 and 2504 with a firewall.
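
If remote control is enabled anyway and the nodes run the built-in Windows Firewall (Win2k3 SP1 and later), those two ports can be opened roughly like this; the rule names are arbitrary:

rem allow the NLB remote-control ports mentioned above
netsh firewall add portopening protocol=UDP port=1717 name="NLB remote control 1717"
netsh firewall add portopening protocol=UDP port=2504 name="NLB remote control 2504"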

After filling in all the fields, click "Next". In the "Cluster IP Addresses" window, if
necessary, we add the additional virtual IP addresses that will be
used by this cluster. In the next "Port Rules" window, you can
set up load balancing for a single port or a group of ports, for all or for
selected IPs, over the UDP or TCP protocols, and also block access to the cluster on
certain ports (which does not replace a firewall). By default the cluster
processes requests on all ports (0-65535); it is better to limit this list,
adding only the ports that are really needed. Although, if you do not feel like fussing over it,
you can leave everything as it is. By the way, in Win2k, by default, all traffic
directed at the cluster was processed only by the node with the highest priority;
the remaining nodes stepped in only when the main one failed.

For example, for IIS you only need to enable ports 80 (http) and 443 (https).
Moreover, you can arrange, for example, for secure connections to be processed
only by specific servers that have the certificate installed. To add a
new rule, click "Add"; in the dialog box that appears, enter
the IP address of the node, or, if the rule applies to everyone, leave the
"All" checkbox set. In the "From" and "To" fields of the port range, set the same value:
80. The key field is "Filtering Mode" - it
specifies who will process the request. There are three options that define the
filtering mode: "Multiple hosts", "Single host" and "Disable this range of ports".
Selecting "Single host" means that traffic directed at the selected IP (of a computer
or of the cluster) with the specified port number will be processed by the active node
with the lowest priority value (more on that below). Selecting "Disable..."
means that such traffic will be dropped by all members of the cluster.

In the "Multiple nodes" filtering mode, you can additionally specify the option
client affinity definitions to direct traffic from a given client to
the same cluster node. There are three options: "None", "Single" or "Class
C". Choosing the first means that any request will be answered by an arbitrary
node. But you should not use it if UDP protocol is selected in the rule or
"Both". When choosing other items, the similarity of customers will be determined by
specific IP or class C network range.

So, for our rule for port 80 we will opt for the option
"Multiple hosts - Class C". The rule for 443 is filled in the same way, but we use
"Single host", so that the client is always answered by the main node with the lowest
priority value. If the manager finds an incompatible rule, it will display a
warning message, and a corresponding entry will also be added to the Windows
event log.

Next, we connect to the node of the future cluster by entering its name or real IP, and
define the interface that will be connected to the cluster network. In the "Host
Parameters" window we select a priority from the list, specify the network settings and set the initial
state of the node (running, stopped, paused). The priority is at the same time
the node's unique identifier; the lower the number, the higher the priority.
The node with priority 1 is the master server, which receives the
packets first and acts as the routing manager.

The "Keep state after the computer restarts" checkbox makes it possible, in the event of a
failure or reboot of that node, to bring it back online automatically. After clicking
"Finish", an entry about the new cluster appears in the Manager window, so far containing
a single node.
The next node is just as easy to add. Select "Add Node" from the menu, or
"Connect to existing", depending on which computer
the connection is made from (whether it is already a member of the cluster or not). Then in the window
specify the name or address of the computer; if there are sufficient rights to connect, the new
node will be joined to the cluster. At first the icon next to its name will
look different, but once the convergence process completes, it will be the same as that of the
first computer.

Since the manager shows the properties of the nodes as of the moment it connected, to
see the current state, select the cluster and choose
"Refresh" from the context menu. The manager will connect to the cluster and display the updated data.

After installing the NLB cluster, do not forget to change the DNS record so that
name resolution now points at the cluster IP.
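
On a Windows DNS server this can be done from the command line with dnscmd; the server, zone, host name and address below are placeholders for your own values:

rem point the service name at the cluster's virtual IP address
dnscmd dns1.example.local /RecordAdd example.local www A 192.168.0.100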

Change server load

In this configuration all servers are loaded evenly (except with the
"Single host" option). In some cases it is necessary to redistribute the load,
assigning most of the work to one of the nodes (for example, the most powerful one).
For a cluster, the rules can be modified after they have been created by selecting
"Cluster Properties" from the context menu that appears when you click on its name.
All the settings we talked about above are available there. The menu item
"Host Properties" provides a few more options. In "Host Parameters"
you can change the priority value of a particular node. In "Port
Rules" you cannot add or delete a rule; that is available only at the
cluster level. But by choosing to edit a specific rule we get the opportunity to
adjust some of its settings. So, with the filtering mode set to
"Multiple hosts", the "Load estimation" item becomes available, allowing you to
shift the load towards a specific node. By default it is set to
"Equal", but in "Load estimation" you can specify a different share of the load for a
specific node, as a percentage of the total cluster load. If the
"Single host" filtering mode is active, a new "Handling priority" parameter appears in this
window. Using it, you can arrange for traffic to one port to be
processed first of all by one node of the cluster, and traffic to another port by another
node.

Event logging

As already mentioned, the Network Load Balancing component records all
cluster actions and changes in the Windows event log. To see them,
select "Event Viewer - System"; NLB entries appear as WLBS messages (from
Windows Load Balancing Service, as this service was called in NT). In addition, the
manager window displays the latest messages, containing information about errors
and about any configuration changes. By default this information is not
saved. To have it written to a file, select "Options ->
Log Options", set the "Enable logging" checkbox and specify a file
name. The new file will be created in a subdirectory of your account under Documents
and Settings.

Setting up IIS with replication

A cluster is all very well, but without a service it makes no sense. So let's add IIS (Internet
Information Services). The IIS server is included with Win2k3, but to
minimize the possibility of attacks on the server it is not installed by default.

There are two ways to install IIS: through the "Control Panel" or through
the server role management wizard. Let's consider the first one. Go to
"Control Panel - Add or Remove Programs" (Control Panel - Add or
Remove Programs) and select "Add/Remove Windows Components" (Add/Remove Windows
Components). Now go to the "Application Server" item and, under "Internet
Information Services (IIS)", check everything that is needed. By default, the server's working directory is \Inetpub\wwwroot.
Once installed, IIS can serve static documents.

First of all, decide what components and resources will be required. You will need one master node, at least a dozen identical compute nodes, an Ethernet switch, a power distribution unit, and a rack. Determine the amount of wiring and cooling, as well as the amount of space you need. Also decide what IP addresses you want to use for nodes, what software you will install and what technologies will be required to create parallel computing power (more on this below).

  • Although the hardware is expensive, all of the software in this article is free, and most of it is open source.
  • If you want to know how fast your supercomputer could theoretically be, use this tool:

Mount the nodes. You will need to build hosts or purchase pre-built servers.

  • Choose server cases that make the most efficient use of space and power and that cool efficiently.
  • Or you can "recycle" a dozen or so used, somewhat outdated servers - even if together they are bulkier and heavier than newer components, you will save a decent amount. All processors, network adapters and motherboards must be the same for the computers to work well together. Of course, don't forget about RAM and hard drives for each node, as well as at least one optical drive for the master node.
  • Install the servers in the rack. Start at the bottom so the rack isn't top-heavy. You will need a friend's help - assembled servers can be very heavy, and it is quite difficult to slide them onto the rails that support them in the rack.

    Install an Ethernet switch next to the rack. It's worth configuring the switch right away: set the jumbo frame size to 9000 bytes, set the static IP address you chose in step 1, and turn off unnecessary protocols such as SMTP.

    Install a power distribution unit (PDU, or Power Distribution Unit). Depending on the maximum power your nodes draw under load, you may need 220 volts for a high-performance computer.

  • When everything is set, proceed to the configuration. Linux is in fact the go-to system for high-performance (HPC) clusters - not only is it ideal for scientific computing, but you also don't have to pay to install a system on hundreds or even thousands of nodes. Imagine how much it would cost to install Windows on all nodes!

     • Start by installing the latest BIOS for the motherboard and the latest firmware and drivers from the manufacturer, which must be the same on all servers.
     • Install your preferred Linux distribution on all nodes, with a GUI on the master node. Popular choices: CentOS, OpenSuse, Scientific Linux, RedHat and SLES.
     • The author highly recommends using the Rocks Cluster Distribution. In addition to installing all the software and tools a cluster needs, Rocks provides an excellent method for quickly rolling out multiple copies of the system onto similar servers using PXE boot and Red Hat's "Kick Start" procedure.
  • Install the message passing interface, resource manager, and other required libraries. If you didn't install Rocks in the previous step, you'll have to manually install the required software to set up the parallel computing logic.

     • To get started, you'll need a portable batch system, such as the Torque Resource Manager, which allows you to split and distribute tasks across multiple machines.
     • Add the Maui Cluster Scheduler to Torque to complete the installation (a minimal job script showing how these pieces fit together follows at the end of this list).
     • Next, you need to set up a message passing interface, which is necessary for the individual processes on each node to share data. Open MPI is the easiest option.
     • Don't forget about multi-threaded math libraries and compilers that will "assemble" your programs for distributed computing. Did I already say that you should just install Rocks?
  • Connect computers to the network. The master node sends tasks for calculation to slave nodes, which in turn must return the result back, and also send messages to each other. And the sooner this happens, the better.

     • Use a private Ethernet network to connect all the nodes into a cluster.
     • The master node can also act as an NFS, PXE, DHCP, TFTP and NTP server on that Ethernet network.
     • You must separate this network from the public network to ensure that its packets do not interfere with other traffic on the LAN.
  • Test the cluster. The last thing you should do before giving users access to the computing power is performance testing. The HPL (High Performance Linpack) benchmark is a popular option for measuring the speed of computation in a cluster. You need to compile the software from source with the highest degree of optimization that your compiler allows for the architecture you have chosen.

     • You should, of course, compile with all the optimization settings that are available for the platform you have chosen. For example, if using an AMD CPU, compile with Open64 at the highest -O optimization level.
     • Compare your results with TOP500.org to see how your cluster measures up against the 500 fastest supercomputers in the world!
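
    As a concrete illustration of how Torque/Maui and MPI fit together once everything is installed, a job script might look roughly like the sketch below; the job name, node counts, walltime and the MPI-enabled xhpl binary are all placeholders:

    #!/bin/bash
    #PBS -N hpl-test              # job name
    #PBS -l nodes=4:ppn=2         # ask Torque for 4 nodes with 2 cores each
    #PBS -l walltime=01:00:00
    cd $PBS_O_WORKDIR
    # run the MPI program on every core that Torque allocated to this job
    mpirun -np 8 -machinefile $PBS_NODEFILE ./xhpl

    Submit it with qsub and watch the queue with qstat; Maui decides when and where it actually runs.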
  • Introduction

    A server cluster is a group of independent servers managed by the cluster service that work together as a single system. Server clusters are created by bringing multiple Windows® 2000 Advanced Server and Windows 2000 Datacenter Server-based servers together to provide high availability, scalability, and manageability for resources and applications.

    The task of a server cluster is to provide continuous user access to applications and resources in cases of hardware or software failures or planned equipment shutdowns. If one of the cluster servers becomes unavailable due to a failure or shutdown for maintenance, information resources and applications are redistributed among the remaining available cluster nodes.

    For cluster systems, the use of the term "high availability" is preferred over the term "fault tolerance", because fault-tolerance technologies require a higher degree of hardware resilience and special recovery mechanisms. As a rule, fault-tolerant servers use a high degree of hardware redundancy plus, in addition, specialized software that allows recovery almost immediately after any single software or hardware failure. These solutions are significantly more expensive than cluster technologies, since organizations are forced to overpay for additional hardware that sits idle most of the time and is used only in case of failures. Fault-tolerant servers are used for applications handling high-value, transaction-intensive workloads, such as payment processors, ATMs or stock exchanges.

    Although the Cluster service is not guaranteed to run non-stop, it provides a high level of availability that is sufficient to run most mission-critical applications. The Cluster service can monitor the operation of applications and resources, automatically recognizing the state of failures and recovering the system after they are resolved. This provides more flexible workload management within the cluster, and improves overall system availability.

    Key benefits of using the Cluster service:

    • High availability. In the event of a node failure, the cluster service transfers control of resources, such as hard disks and network addresses, to the active cluster node. When a software or hardware failure occurs, the cluster software restarts the failed application on the live node, or shifts the entire load of the failed node to the remaining live nodes. In this case, users may notice only a short delay in service.
    • Failback. The Cluster service automatically redistributes the workload within the cluster when a failed node becomes available again.
    • Manageability. Cluster Administrator is a snap-in that you can use to manage the cluster as a single system, as well as to manage applications. It provides a transparent view of how applications run, as if they were running on a single server. You can move applications to different servers within the cluster by dragging and dropping cluster objects. Data can be moved in the same way. This method can be used to manually distribute the server workload, and also to unload servers before stopping them for scheduled maintenance. In addition, Cluster Administrator allows you to remotely monitor the state of the cluster, all of its nodes and resources.
    • Scalability. To ensure that cluster performance can always keep up with growing demands, the Cluster service is designed to scale. If the overall performance of the cluster becomes insufficient to handle the load generated by the clustered applications, additional nodes can be added to the cluster.

    This document provides instructions for installing the Cluster service on servers running Windows 2000 Advanced Server and Windows 2000 Datacenter Server, and describes how to install the Cluster service on the cluster node servers. This guide does not cover installing and configuring clustered applications; it only walks you through the installation of a simple two-node cluster.

    System requirements for creating a server cluster

    The following checklists will help you prepare for the installation. Step-by-step installation instructions are presented after these lists.

    Software requirements

    • Operating system Microsoft Windows 2000 Advanced Server or Windows 2000 Datacenter Server installed on all servers in the cluster.
    • An installed name resolution service such as Domain Naming System (DNS), Windows Internet Naming System (WINS), HOSTS, etc.
    • Terminal server for remote cluster administration. This requirement is not mandatory, but is recommended only to ensure the convenience of cluster management.

    Hardware Requirements

    • The hardware requirements for a cluster node are the same as those for installing the Windows 2000 Advanced Server or Windows 2000 Datacenter Server operating systems. These requirements can be found on the search page of the Microsoft catalog.
    • The cluster hardware must be certified and listed in the Microsoft Cluster Service Hardware Compatibility List (HCL). The latest version of this list can be found on the Windows 2000 Hardware Compatibility List search page of the Microsoft catalog by selecting the "Cluster" search category.

    Two HCL-qualified computers, each with:

    • A hard disk with a bootable system partition and Windows 2000 Advanced Server or Windows 2000 Datacenter Server installed. This drive must not be connected to the shared storage bus discussed below.
    • A separate PCI controller - Fibre Channel or SCSI - for connecting the external shared storage device. This controller must be present in addition to the boot disk controller.
    • Two PCI network adapters installed in each computer in the cluster.
    • An external disk storage device, listed in the HCL, that is attached to all nodes of the cluster. It will act as the cluster disk. A configuration using hardware RAID arrays is recommended.
    • Cables for connecting the shared storage device to all the computers. Refer to the manufacturer's documentation for instructions on configuring storage devices. If you are connecting to a SCSI bus, you can refer to Appendix A for more information.
    • All hardware on the cluster computers must be completely identical. This will simplify the configuration process and save you from potential compatibility issues.

    Network Configuration Requirements

    • Unique NetBIOS name for the cluster.
    • Five unique static IP addresses: two for private network adapters, two for public network adapters, and one for the cluster.
    • Domain Account for the cluster service (all cluster nodes must be members of the same domain)
    • Each node must have two network adapters - one for connecting to the public network, one for intra-cluster communication of nodes. A configuration using a single network adapter to connect to a public and private network at the same time is not supported. A separate network adapter for the private network is required to comply with HCL requirements.

    Requirements for shared storage drives

    • All shared storage drives, including the quorum drive, must be physically connected to the shared bus.
    • All disks connected to the shared bus must be available to each node. This can be verified during the installation and configuration phase of the host adapter. For detailed instructions refer to the adapter manufacturer's documentation.
    • SCSI devices must be assigned unique target SCSI IDs, and the terminators on the SCSI bus must be set correctly, according to the manufacturer's instructions.
    • All shared storage disks must be configured as basic disks (not dynamic)
    • All partitions on shared storage drives must be formatted with the NTFS file system.

    It is highly recommended that all shared storage drives be configured into hardware RAID arrays. Although not required, creating fault-tolerant RAID configurations is key to protecting against disk failures.

    Installing a cluster

    General overview of the installation

    During the installation process, some nodes will be shut down and some will be rebooted. This is necessary in order to ensure the integrity of data located on disks connected to the common bus of an external storage device. Data corruption can occur when multiple nodes simultaneously attempt to write to the same drive that is not protected by the cluster software.

    Table 1 will help you determine which nodes and storage devices must be enabled for each step of the installation.

    This guide describes how to create a two-node cluster. However, if you are setting up a cluster with more than two nodes, you can use the column value "Node 2" to determine the state of the remaining nodes.

    Table 1. Sequence of enabling devices during cluster installation

    Step | Node 1 | Node 2 | Storage device | Comment
    Setting network parameters | On | On | Off | Make sure all storage devices connected to the shared bus are turned off. Turn on all nodes.
    Setting up shared drives | On | Off | On | Turn off all nodes. Power on the shared storage device, then power on the first node.
    Checking the configuration of shared drives | Off | On | On | Turn off the first node, turn on the second node. Repeat for nodes 3 and 4 if necessary.
    Configuring the first node | On | Off | On | Turn off all nodes; turn on the first node.
    Configuring the second node | On | On | On | After successfully configuring the first node, power on the second node. Repeat for nodes 3 and 4 if necessary.
    Completing the installation | On | On | On | At this point, all nodes should be turned on.

    Before installing the cluster software, follow these steps:

    • Install the Windows 2000 Advanced Server or Windows 2000 Datacenter Server operating system on each computer in the cluster.
    • Configure the network settings.
    • Set up the shared storage drives.

    Complete these steps on each node of the cluster before installing the Cluster service on the first node.

    To configure the Cluster service on a server running Windows 2000, your account must have administrative rights on each node. All cluster nodes must be either member servers or controllers of the same domain at the same time. Mixed use of member servers and domain controllers in a cluster is not allowed.

    Installing the Windows 2000 operating system

    To install Windows 2000 on each cluster node, refer to the documentation that you received with the operating system.

    This document uses the naming structure from the manual "Step-by-Step Guide to a Common Infrastructure for Windows 2000 Server Deployment". However, you can use any names.

    You must be logged in with an administrator account before starting the installation of the cluster service.

    Configuring network settings

    Note: At this point in the installation, turn off all shared storage devices, and then turn on all nodes. You must prevent multiple nodes from accessing shared storage at the same time until the Cluster service is installed on at least one of the nodes and that node is powered on.

    Each node must have at least two network adapters installed - one to connect to the public network and one to connect to the private network of the cluster nodes.

    The private network adapter provides node-to-node communication, exchange of information about the current state of the cluster, and cluster management. Each node's public network adapter connects the cluster to the public network where the client computers are.

    Make sure all network adapters are physically connected correctly: private network adapters are only connected to other private network adapters, and public network adapters are connected to public network switches. The connection diagram is shown in Figure 1. Perform this check on each node of the cluster before proceeding to configure the shared storage drives.

    Figure 1: An example of a two-node cluster

    Configuring a private network adapter

    Perform these steps on the first node of your cluster.

    1. Right-click the My Network Places icon and select Properties.
    2. Right-click the Local Area Connection 2 icon.

    Note: Which network adapter serves the private network and which one the public network depends on how the network cables are physically connected. In this document we assume that the first adapter (Local Area Connection) is connected to the public network, and the second adapter (Local Area Connection 2) is connected to the cluster's private network. In your case this may not be so.

    1. Select Status. The Local Area Connection 2 Status window shows the connection status and its speed. If the connection is in a disconnected state, check the cables and the connections. Fix the issue before continuing. Click the Close button.
    2. Right-click the Local Area Connection 2 icon again, select Properties and click the Configure button.
    3. Select the Advanced tab. The window shown in Figure 2 will appear.
    4. For private network adapters, the speed must be set manually instead of the default value. Specify your network's speed in the drop-down list. Do not use the "Auto Sense" or "Auto Select" values to pick the speed, since some network adapters may drop packets while determining the connection speed. To set the network adapter speed, specify the actual value for the Connection Type or Speed parameter.

    Figure 2: Network adapter advanced settings

    All network adapters in a cluster that are connected to the same network must be configured identically and use the same values for duplex mode, flow control, connection type, and so on. Even if different nodes use different network hardware, the values of these parameters must be the same.

    1. Select Internet Protocol (TCP/IP) in the list of components used by the connection.
    2. Click the Properties button.
    3. Select the "Use the following IP address" option and enter the address 10.1.1.1. (For the second node, use the address 10.1.1.2.)
    4. Set the subnet mask: 255.0.0.0.
    5. Click the Advanced button and select the WINS tab. Set the option to "Disable NetBIOS over TCP/IP". Click OK to return to the previous menu. Perform this step only for the private network adapter.

    Your dialog box should look like Figure 3.

    Figure 3: Private network connection IP address
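
    The same addressing can also be done from the command line with netsh, which is convenient when several nodes have to be configured; the connection name must match yours (here the not-yet-renamed Local Area Connection 2), and the address shown is the one for the first node:

    rem assign the private-network address from the command line (node 1 shown)
    netsh interface ip set address name="Local Area Connection 2" source=static addr=10.1.1.1 mask=255.0.0.0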

    Configuring a public network adapter

    Note: If a DHCP server is running on a public network, an IP address for the public network adapter may be assigned automatically. However, this method is not recommended for cluster node adapters. We strongly recommend that you assign permanent IP addresses to all public and private host NICs. Otherwise, if the DHCP server fails, access to the cluster nodes may not be possible. If you are forced to use DHCP for public network adapters, use long address leases to ensure that the dynamically assigned address remains valid even if the DHCP server becomes temporarily unavailable. Always assign permanent IP addresses to private network adapters. Keep in mind that the Cluster service can only recognize one network interface per subnet. If you need help with assigning network addresses in Windows 2000, see the operating system's built-in help.

    Renaming network connections

    For clarity, we recommend that you rename your network connections. For example, you can rename Local Area Connection 2 to "Connection to a private cluster network". This will help you identify the networks more easily and assign their roles correctly.

    1. Right-click the Local Area Connection 2 icon.
    2. In the context menu, select Rename.
    3. Type "Connection to a private cluster network" in the text field and press ENTER.
    4. Repeat steps 1-3 and rename Local Area Connection to "Connection to a public network".

    Figure 4: Renamed network connections

    1. The renamed network connections should look like Figure 4. Close the Network and Dial-up Connections window. The new network connection names are automatically replicated to the other cluster nodes when they are powered up.

    Checking Network Connections and Name Resolutions

    To verify that the configured network hardware is working, complete the following steps for all network adapters in each host. To do this, you must know the IP addresses of all network adapters in the cluster. You can get this information by running the command ipconfig on each node:

    1. Click Start, select Run and type cmd in the text box. Click OK.
    2. Type the command ipconfig /all and press ENTER. You will see the IP configuration of each network adapter on the local machine.
    3. If you do not have a command prompt window open yet, perform step 1.
    4. Type the command ping ipaddress, where ipaddress is the IP address of the corresponding network adapter on the other node. Assume, for example, that the network adapters have the following IP addresses:

    Node number | Network connection name | Network adapter IP address
    1 | Connection to a public network | 172.16.12.12
    1 | Connection to a private cluster network | 10.1.1.1
    2 | Connection to a public network | 172.16.12.14
    2 | Connection to a private cluster network | 10.1.1.2

    In this example, you need to run the commands ping 172.16.12.14 and ping 10.1.1.2 from node 1, and run the commands ping 172.16.12.12 and ping 10.1.1.1 from node 2.

    To check name resolution, run the command ping, using the computer name as the argument instead of its IP address. For example, to check name resolution for the first cluster node named hq-res-dc01, run the command ping hq-res-dc01 from any client computer.

    Domain membership check

    All nodes in the cluster must be members of the same domain and must be able to network with the domain controller and DNS server. The nodes can be configured as domain member servers or as controllers of the same domain. If you decide to make one of the nodes a domain controller, then all other nodes in the cluster must also be configured as domain controllers of the same domain. This guide assumes that all nodes are domain controllers.

    Note: For links to additional documentation on configuring domains, DNS, and DHCP services in Windows 2000, see Related Resources at the end of this document.

    1. Right-click My Computer and select Properties.
    2. Select the Network Identification tab. In the System Properties dialog box you will see the full computer name and the domain name. In our example the domain is called reskit.com.
    3. If you have configured the node as a member server, you can join it to the domain at this point. Click the Properties button and follow the instructions to join the computer to the domain.
    4. Close the System Properties and My Computer windows.

    Create a Cluster Service Account

    For the Cluster service you must create a separate domain account under which it will run. The installer will require you to enter the credentials for the Cluster service, so the account must be created before the service is installed. The account should not belong to any domain user and should be used exclusively for running the Cluster service.

    1. Click Start, select Programs / Administrative Tools, and start the Active Directory Users and Computers snap-in.
    2. Expand the reskit.com domain if it is not already expanded.
    3. Select Users in the list.
    4. Right-click Users, select New from the context menu, then select User.
    5. Enter a name for the Cluster service account, as shown in Figure 5, and click Next.

    Figure 5: Adding a Cluster User

    1. Check the "User cannot change password" and "Password never expires" boxes. Click Next and then Finish to create the user.

    Note: If your administrative security policy does not allow the use of passwords that never expire, you will need to update the password and configure the Cluster service on each node before it expires.

    1. Right-click the Cluster user in the right-hand pane of Active Directory Users and Computers.
    2. In the context menu, select "Add members to a group".
    3. Select the Administrators group and click OK. The new account now has administrator privileges on the local computer.
    4. Close the Active Directory Users and Computers snap-in.
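
    The same account can also be created and granted local administrator rights from the command line, if you prefer; the account name and password below are only examples:

    rem create the dedicated Cluster service account in the domain
    net user cluster Ex@mplePassw0rd /add /domain /comment:"Cluster service account"
    rem give it administrator rights on this node
    net localgroup Administrators cluster /add
    rem the "password never expires" option still has to be set as in the steps above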

    Configuring Shared Storage Drives

    Warning: Make sure that Windows 2000 Advanced Server or Windows 2000 Datacenter Server and the Cluster service are installed, configured and running on at least one of the cluster nodes before you boot the Windows 2000 operating system on the other nodes. If these conditions are not met, the cluster disks may be damaged.

    To start configuring shared storage drives, turn off all nodes. After that, turn on the shared storage device, then turn on node 1.

    Quorum Disk

    The quorum disk is used to store the checkpoints and recovery log files of the cluster database, and provides for cluster management. We make the following recommendations for creating the quorum disk:

    • Create a small partition (at least 50MB in size) to use as the quorum disk. We generally recommend creating a 500 MB quorum disk.
    • Allocate a separate disk for the quorum resource. Since the entire cluster will fail if the quorum disk fails, we strongly recommend using a hardware RAID array.

    During the installation of the Cluster service, you will need to assign a drive letter to the quorum. In our example, we will use the letter Q.

    Configuring Shared Storage Drives

    1. Right-click My Computer and select Manage. In the window that opens, expand the Storage category.
    2. Select Disk Management.
    3. Make sure that all shared storage drives are formatted as NTFS and have the status Basic. If you connect a new drive, the Write Signature and Upgrade Disk Wizard will start automatically. When the wizard starts, click Update to let it run; after that the drive will be marked as Dynamic. To convert the disk back to basic, right-click Disk # (where # is the number of the disk you are working with) and select Revert to Basic Disk.

    Right-click the unallocated area next to the corresponding disk.

    1. Select Create Partition.
    2. The Create Partition Wizard will start. Click Next twice.
    3. Enter the desired partition size in megabytes and click Next.
    4. Click Next, accepting the default drive letter.
    5. Click Next to format and create the partition.

    Assign drive letters

    After the data bus, disks, and shared storage partitions are configured, you must assign drive letters to all partitions on all disks in the cluster.

    Note: Mount points are a file system feature that lets you mount a file system into an existing directory without assigning it a drive letter. Mount points are not supported by clusters. Any external drive used as a cluster resource must be partitioned into NTFS partitions, and those partitions must be assigned drive letters.

    1. Right-click the desired partition and select Change Drive Letter and Path.
    2. Choose a new drive letter.
    3. Repeat steps 1 and 2 for all shared storage drives.

    Figure 6: Drive partitions with assigned letters

    1. At the end of the procedure, the Computer Management snap-in window should look like Figure 6. Close the Computer Management snap-in.

    Checking the operation and sharing of disks

    1. Click Start, select Programs / Accessories, and run Notepad.
    2. Type a few words and save the file under the name test.txt using the Save As command from the File menu. Close Notepad.
    3. Double-click the My Documents icon.
    4. Right-click the test.txt file and select Copy from the context menu.
    5. Close the window.
    6. Open My Computer.
    7. Double-click the disk partition of the shared storage device.
    8. Right-click and select Paste.
    9. A copy of the test.txt file should appear on the shared storage drive.
    10. Double-click the test.txt file to open it from the shared storage drive. Close the file.
    11. Select the file and press the Del key to delete the file from the cluster disk.

    Repeat the procedure for all disks in the cluster to ensure they are accessible from the first node.

    Now turn off the first node, turn on the second node and repeat the steps of the section Checking the operation and sharing of disks. Perform the same steps on all additional nodes. Once you have verified that all nodes can read and write information to the shared storage disks, turn off all but the first node and proceed to the next section.
