HPE Storage Users Group

A Storage Administrator Community




 Post subject: 3PAR OOTB Failure
PostPosted: Wed May 20, 2020 8:23 am 

Joined: Wed May 20, 2020 5:18 am
Posts: 1
Hello mates,

I'm new to 3PAR storage. I bought a 3PAR from a refurbished-hardware reseller, with:
4 x 200 GB SSD
68 x 450 GB 10k HDD

It's a 3PAR 7400 with one controller enclosure (2 nodes) and two 2.5" disk shelves.

The 3PAR OS is still at 3.1.2, and I can't get an HPE SAID (it's too expensive for private use).
I've found all the files needed on the internet to get to a newer version.

But during the OOTB procedure I get the following error:
Quote:
3PAR Console Menu 1638088-1 3.1.2.422

1. Out Of The Box Procedure
2. Re-enter network configuration
3. Update the CBIOS
4. Enable or disable CLI error injections
5. Perform a Node-to-Node rescue
6. Set up the system to wipe and rerun ootb
7. Cancel a wipe
8. Perform a deinstallation
9. Update the system for recently added hardware (admithw)
10. Check system health (checkhealth)
11. Exit
> 1


It appears your Cluster is in a proper manual startup state to proceed.

Welcome to the Out-Of-The-Box Experience 3.1.2.422

*****************************************************************************

*****************************************************************************
* *
* CAUTION!! CONTINUING WILL CAUSE COMPLETE AND IRRECOVERABLE DATA LOSS *
* *
*****************************************************************************
*****************************************************************************

You need to have the InServ network config information available.
This can be obtained from the Systems Assurance Document.


DO YOU WISH TO CONTINUE? yes/no ==> yes


Cluster has the following nodes:

Node 1


Enter < C > to continue or < Q > to quit ==> c


Please identify a location so that time zone rules can be set correctly.
Please select a continent or ocean.
1) Africa
2) Americas
3) Antarctica
4) Arctic Ocean
5) Asia
6) Atlantic Ocean
7) Australia
8) Europe
9) Indian Ocean
10) Pacific Ocean
11) none - I want to specify the time zone using the Posix TZ format.
#? 8
Please select a country.
1) Aaland Islands 18) Greece 35) Norway
2) Albania 19) Guernsey 36) Poland
3) Andorra 20) Hungary 37) Portugal
4) Austria 21) Ireland 38) Romania
5) Belarus 22) Isle of Man 39) Russia
6) Belgium 23) Italy 40) San Marino
7) Bosnia & Herzegovina 24) Jersey 41) Serbia
8) Britain (UK) 25) Latvia 42) Slovakia
9) Bulgaria 26) Liechtenstein 43) Slovenia
10) Croatia 27) Lithuania 44) Spain
11) Czech Republic 28) Luxembourg 45) Sweden
12) Denmark 29) Macedonia 46) Switzerland
13) Estonia 30) Malta 47) Turkey
14) Finland 31) Moldova 48) Ukraine
15) France 32) Monaco 49) Vatican City
16) Germany 33) Montenegro
17) Gibraltar 34) Netherlands
#? 16

The following information has been given:

Germany

Therefore TZ='Europe/Berlin' will be used.
Local time is now: Wed May 20 15:15:59 CEST 2020.
Universal Time is now: Wed May 20 13:15:59 UTC 2020.
Is the above information OK?
1) Yes
2) No
#? 1

You can make this change permanent for yourself by appending the line
TZ='Europe/Berlin'; export TZ
to the file '.profile' in your home directory; then log out and log in again.

Here is that TZ value again, this time on standard output so that you
can use the /usr/bin/tzselect command in shell scripts:

Updating all nodes to use timezone Europe/Berlin...
Timezone set successfully.


Setting TOD on all nodes.

Current date according to the system: Wed May 20 15:16:01 CEST 2020

Enter dates in MMDDhhmmYYYY format. For example, 031822572002 would
be March 18, 2002 10:57 PM.

Enter the correct date and time, or just press enter to accept
the date shown above. ==>




Enter the InServ system name ==> 3par

Cluster will be initialized with the name < 3par >


IS THIS THE CORRECT NAME? yes/change ==> yes


Cluster is being initialized with the name < 3par > ...Please Wait...


Please verify your InForm OS versions are correct.

Release version 3.1.2.422 (MU2)
Patches: P10

Component Name Version
CLI Server 3.1.2.422 (MU2)
CLI Client 3.1.2.422 (MU2)
System Manager 3.1.2.422 (MU2)
Kernel 3.1.2.422 (MU2)
TPD Kernel Code 3.1.2.422 (MU2)


Enter < C > to continue or < Q > to quit ==> c

Examining the port states...
All ports are in acceptable states.

Examining state of new disks...

Found < 67 > HCBRE0450GBAS10K disks
Found < 4 > HRALP0200GBASSLC disks
Found < 1 > SLTN0450S5xnN010 disks

Cluster has < 72 > total disks in < 72 > magazines.
< 72 > are degraded.

Now would be the time to fix any disk problems.


Enter < C > to continue or < Q > to quit ==> c



Ensuring all ports are properly connected before continuing... Please Wait...


Found one or more discrepancies when examining cage connections:

Component --------Description-------- Qty
Cabling Missing I/O module 3
Cabling Cable chains are unbalanced 2

-Identifier- -- ----------------------------Description----------------------------
cage0 -- I/O unknown missing. Check status and cabling to cage0 I/O unknown
cage0 -- node0 DP-1 has 0 cages, node1 DP-1 has 2 cages
cage0 -- node0 DP-2 has 0 cages, node1 DP-2 has 1 cages
cage1 -- I/O 0 missing. Check status and cabling to cage1 I/O 0
cage2 -- I/O 0 missing. Check status and cabling to cage2 I/O 0


Please verify whether the output above is as expected.


Enter < C > to continue, < R > to retry, or < Q > to quit ==> c


Examining drive cage firmware... Please wait a moment...
3 cages require firmware upgrades.
This will require approximately 12 minutes.

Enter y to proceed with this action, or n to skip it.
y

2020-05-20 15:16:42 CEST
Skipping cage cage0 cpuA already up to date at rev 320e
Cage cage0 cpuB is invalid.
Skipping upgrade for cage cage0 cpuB.
Waiting for cage0 to finish coming back...
2020-05-20 15:16:42 CEST
Skipping cage cage1 cpuA already up to date at rev 320e
Cage cage1 cpuB is invalid.
Skipping upgrade for cage cage1 cpuB.
Waiting for cage1 to finish coming back...
2020-05-20 15:16:42 CEST
Skipping cage cage2 cpuA already up to date at rev 320e
Cage cage2 cpuB is invalid.
Skipping upgrade for cage cage2 cpuB.
Waiting for cage2 to finish coming back...


All disks have current firmware.

Issuing admitpd... Please wait a moment...
admitpd completed with the following results...

Found < 67 > HCBRE0450GBAS10K disks
Found < 4 > HRALP0200GBASSLC disks
Found < 1 > SLTN0450S5xnN010 disks

Cluster has < 72 > total disks in < 72 > magazines.
< 72 > are degraded.

Not all of the disks are in a valid state.


Enter < C > to continue or < Q > to quit ==> c



At this point, it is recommended that the OOTB stress test be started. This
will run heavy I/O on the PDs for 1 hour following 1 hour of chunklet
initialization. The stress test will stop in approximately 2 hours and 15
minutes. Chunklet initialization may continue even after the stress test
completes. Failures will show up as slow disk events.


Do you want to start the test (y/n)? ==> y
Starting system stress test...
Unable to proceed because the system is not connected to any disks.

Admin volume could not be properly created!

Exiting Out-Of-The-Box Experience...


3PAR Console Menu 1638088-1 3.1.2.422

1. Out Of The Box Procedure
2. Re-enter network configuration
3. Update the CBIOS
4. Enable or disable CLI error injections
5. Perform a Node-to-Node rescue
6. Set up the system to wipe and rerun ootb
7. Cancel a wipe
8. Perform a deinstallation
9. Update the system for recently added hardware (admithw)
10. Check system health (checkhealth)
11. Exit
>


Quote:
Checking alert
Checking ao
Checking cabling
Checking cage
Checking dar
Checking date
Checking file
Checking ld
Checking license
Checking network
Checking node
Checking pd
Checking pdch
Checking port
Checking qos
Checking rc
Checking snmp
Checking task
Checking vlun
Checking vv
Checking sp
Component ----------------------------Description----------------------------- Qty
Alert New alerts 8
Cabling Missing I/O module 3
Cabling Cable chains are unbalanced 2
Cage Cages not on current firmware 3
Cage Cages missing B loop 3
File sr_mnt not mounted 1
File Behavior altering files 1
File Admin Volume is not mounted 1
LD Preserved data storage is not enabled 1
LD Number of logging LDs does not match number of nodes in the cluster 1
LD Preserved data storage space does not equal total node's Data memory 1
License No license has been entered. 1
Network Too few working admin network connections 1
Node Nodes that are not online 1
PD PD count exceeds licensed quantity 1
PD Cages with unbalanced disks 4
PD PDs that are degraded 72
Port Ports with mismatched mode and type 1
QoS Unable to check QoS 1

Component ----------------------Identifier---------------------- -------------------------------------------------------------------------------------Description--------------------------------------------------------------------------------------
Alert sw_pr:0 The PR is currently getting data from the internal drive on node 1, not the admin volume. Previously recorded alerts will not be visible until the PR transitions to the admin volume.
Alert hw_node:0,hw_subsys_dev:IDE_Drive,hw_subsys_instance:0 Node 0, SubSys Device IDE_Drive, SubSys Instance 0 Failed (Node IDE Drive Failure)
Alert sw_sysmgr The PR is not available on the admin volume. The system was unable to save status data for 1 tasks.
Alert sw_system System write cache availability is degraded.
Alert sw_pdata Preserved data LDs have not been configured.
Alert hw_cage:0 Cage 0 Degraded (Loop Offline)
Alert hw_cage:1 Cage 1 Degraded (Loop Offline)
Alert hw_cage:2 Cage 2 Degraded (Loop Offline)
Cabling cage0 I/O unknown missing. Check status and cabling to cage0 I/O unknown
Cabling cage0 node0 DP-1 has 0 cages, node1 DP-1 has 2 cages
Cabling cage0 node0 DP-2 has 0 cages, node1 DP-2 has 1 cages
Cabling cage1 I/O 0 missing. Check status and cabling to cage1 I/O 0
Cabling cage2 I/O 0 missing. Check status and cabling to cage2 I/O 0
Cage cage:0 Firmware is not current
Cage cage:0 Missing B loop
Cage cage:1 Firmware is not current
Cage cage:1 Missing B loop
Cage cage:2 Firmware is not current
Cage cage:2 Missing B loop
File node:1 sr_mnt not mounted
File node:1 Behavior altering file "CLI_INJECT_ALLOWED" exists, created on May 19 15:14
File node:master Admin Volume is not mounted
LD -- Preserved data storage is not enabled
LD -- Number of logging LDs does not match number of nodes in the cluster
LD -- Preserved data storage space is less than total node's Data memory
License -- No license has been entered.
Network -- Nodes have no admin network link detected
Node node:0 Node is not online
PD -- The number of disks in the system exceeds the licensed capacity
PD Cage:0 PDs FC/10K/0GB unbalanced. Primary path: 20 on Node:1, 0 on Node:0
PD Cage:0 PDs SSD/150K/0GB unbalanced. Primary path: 4 on Node:1, 0 on Node:0
PD Cage:1 PDs FC/10K/0GB unbalanced. Primary path: 24 on Node:1, 0 on Node:0
PD Cage:2 PDs FC/10K/0GB unbalanced. Primary path: 24 on Node:1, 0 on Node:0
PD disk:5000C50072633DE0 Degraded States: missing_B_port
PD disk:5000CCA013203ADF Degraded States: missing_B_port
PD disk:5000CCA013204F37 Degraded States: missing_B_port
PD disk:5000CCA0132054AB Degraded States: missing_B_port
PD disk:5000CCA01322B24F Degraded States: missing_B_port
PD disk:5000CCA0163970E7 Degraded States: missing_B_port
PD disk:5000CCA01650A0D3 Degraded States: missing_B_port
PD disk:5000CCA01650BEDF Degraded States: missing_B_port
PD disk:5000CCA01650DCA3 Degraded States: missing_B_port
PD disk:5000CCA01650E067 Degraded States: missing_B_port
PD disk:5000CCA0165213BB Degraded States: missing_B_port
PD disk:5000CCA0165216D7 Degraded States: missing_B_port
PD disk:5000CCA016522077 Degraded States: missing_B_port
PD disk:5000CCA016535D9B Degraded States: missing_B_port
PD disk:5000CCA01655FE77 Degraded States: missing_B_port
PD disk:5000CCA01655FEAB Degraded States: missing_B_port
PD disk:5000CCA01655FFCB Degraded States: missing_B_port
PD disk:5000CCA0165603F7 Degraded States: missing_B_port
PD disk:5000CCA0165605AF Degraded States: missing_B_port
PD disk:5000CCA016560717 Degraded States: missing_B_port
PD disk:5000CCA016560C47 Degraded States: missing_B_port
PD disk:5000CCA016562E57 Degraded States: missing_B_port
PD disk:5000CCA016562EEB Degraded States: missing_B_port
PD disk:5000CCA016562F2F Degraded States: missing_B_port
PD disk:5000CCA016562F9B Degraded States: missing_B_port
PD disk:5000CCA0165631BF Degraded States: missing_B_port
PD disk:5000CCA0165631C7 Degraded States: missing_B_port
PD disk:5000CCA0165634CB Degraded States: missing_B_port
PD disk:5000CCA016563643 Degraded States: missing_B_port
PD disk:5000CCA016563807 Degraded States: missing_B_port
PD disk:5000CCA0165638D3 Degraded States: missing_B_port
PD disk:5000CCA022396D33 Degraded States: missing_B_port
PD disk:5000CCA02240AE8F Degraded States: missing_B_port
PD disk:5000CCA0224875EB Degraded States: missing_B_port
PD disk:5000CCA02248BE37 Degraded States: missing_B_port
PD disk:5000CCA0224D389F Degraded States: missing_B_port
PD disk:5000CCA0224EC5DB Degraded States: missing_B_port
PD disk:5000CCA0224ED417 Degraded States: missing_B_port
PD disk:5000CCA0224ED41B Degraded States: missing_B_port
PD disk:5000CCA0224ED79F Degraded States: missing_B_port
PD disk:5000CCA022516723 Degraded States: missing_B_port
PD disk:5000CCA022516A43 Degraded States: missing_B_port
PD disk:5000CCA022574183 Degraded States: missing_B_port
PD disk:5000CCA022777FAF Degraded States: missing_B_port
PD disk:5000CCA04316A977 Degraded States: missing_B_port
PD disk:5000CCA0550E37EB Degraded States: missing_B_port
PD disk:5000CCA0550E480B Degraded States: missing_B_port
PD disk:5000CCA0550E49A3 Degraded States: missing_B_port
PD disk:5000CCA0550E49FB Degraded States: missing_B_port
PD disk:5000CCA0550E5BF7 Degraded States: missing_B_port
PD disk:5000CCA0550E6ED3 Degraded States: missing_B_port
PD disk:5000CCA0550E8647 Degraded States: missing_B_port
PD disk:5000CCA0550E864F Degraded States: missing_B_port
PD disk:5000CCA0550E89E3 Degraded States: missing_B_port
PD disk:5000CCA0550E8A43 Degraded States: missing_B_port
PD disk:5000CCA0550E8F2F Degraded States: missing_B_port
PD disk:5000CCA0550E9347 Degraded States: missing_B_port
PD disk:5000CCA0550E95D3 Degraded States: missing_B_port
PD disk:5000CCA0550EE723 Degraded States: missing_B_port
PD disk:5000CCA0550EE9BF Degraded States: missing_B_port
PD disk:5000CCA0550F0317 Degraded States: missing_B_port
PD disk:5000CCA06F01D20B Degraded States: missing_B_port
PD disk:5000CCA06F01F12F Degraded States: missing_B_port
PD disk:5000CCA06F01F19B Degraded States: missing_B_port
PD disk:5000CCA06F01FB4F Degraded States: missing_B_port
PD disk:5000CCA06F0201EF Degraded States: missing_B_port
PD disk:5000CCA06F020243 Degraded States: missing_B_port
PD disk:5000CCA06F0205D7 Degraded States: missing_B_port
PD disk:5000CCA06F0205DF Degraded States: missing_B_port
PD disk:5000CCA06F0205EB Degraded States: missing_B_port
PD disk:5000CCA06F0206EF Degraded States: missing_B_port
PD disk:5000CCA06F05173F Degraded States: missing_B_port
Port port:1:2:4 Mismatched mode and type
QoS -- Unable to check QoS - This system is not licensed for System Reporter features

3PAR Console Menu 1638088-1 3.1.2.422

1. Out Of The Box Procedure
2. Re-enter network configuration
3. Update the CBIOS
4. Enable or disable CLI error injections
5. Perform a Node-to-Node rescue
6. Set up the system to wipe and rerun ootb
7. Cancel a wipe
8. Perform a deinstallation
9. Update the system for recently added hardware (admithw)
10. Check system health (checkhealth)
11. Exit


I tried a full system wipe, but the problem still occurs.
What am I doing wrong, mates? :(
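In case it helps to read the dump: the degraded-PD lines in the checkhealth output all report the same state, which can be tallied with a throwaway Python sketch (the sample lines below are copied from the paste above; the full list is 72 disks):

```python
# Throwaway sketch: tally the "Degraded States" reported for each PD in
# the checkhealth output (sample lines copied from the paste above).
import re
from collections import Counter

checkhealth_lines = """\
PD disk:5000C50072633DE0 Degraded States: missing_B_port
PD disk:5000CCA013203ADF Degraded States: missing_B_port
PD disk:5000CCA06F05173F Degraded States: missing_B_port
"""

states = Counter()
for line in checkhealth_lines.splitlines():
    m = re.search(r"Degraded States:\s+(\S+)", line)
    if m:
        states[m.group(1)] += 1

# Every PD in the sample reports missing_B_port, i.e. only one loop path.
print(dict(states))  # → {'missing_B_port': 3}
```

Run against the full paste, the count comes out to 72, matching the "< 72 > are degraded" summary above: every single disk is missing its B-side path.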


 Post subject: Re: 3PAR OOTB Failure
PostPosted: Wed May 20, 2020 9:58 am 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 1063
Location: Europe
Cages are only connected to node1, they should be connected to both.

Also, you should have less than 6x of each drive type (ssd/fc/nl) in a system.

_________________
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.


 Post subject: Re: 3PAR OOTB Failure
PostPosted: Fri May 22, 2020 4:09 pm 
Site Admin

Joined: Tue Aug 18, 2009 10:35 pm
Posts: 1254
Location: Dallas, Texas
MammaGutt wrote:
Cages are only connected to node1, they should be connected to both.

Also, you should have less than 6x of each drive type (ssd/fc/nl) in a system.


I think MammaGutt meant to say "no less".

Looks like your SAS cabling is not right.
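For anyone else hitting this: the "cable chains are unbalanced" detail lines in the checkhealth output make the problem easy to spot programmatically too. A quick sketch, with the line format assumed from the output pasted above:

```python
# Quick sketch: flag DP ports that see zero cages, using the
# "cable chains are unbalanced" detail lines from checkhealth.
import re

cabling = """\
node0 DP-1 has 0 cages, node1 DP-1 has 2 cages
node0 DP-2 has 0 cages, node1 DP-2 has 1 cages
"""

uncabled = []
for node, port, count in re.findall(r"(node\d) (DP-\d) has (\d+) cages", cabling):
    if int(count) == 0:
        uncabled.append(f"{node} {port}")

# node0 sees no cages on either port -> node0's SAS cabling is missing or bad.
print(uncabled)  # → ['node0 DP-1', 'node0 DP-2']
```

Both of node0's DP ports see zero cages while node1 sees all of them, which is exactly the "cages only connected to node1" situation.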

_________________
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.




Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group | DVGFX2 by: Matt