
Wednesday, June 11, 2014

Switch NVRAM no space



Today I encountered a very strange error while trying to save the switch configuration:

Switch#wr
Building configuration...

% Warning: Saving this config to nvram may corrupt any network management or security files stored at the end of nvram.
Continue? [no]:
% Configuration buffer full, can't add command: ntp clock-period 36028830
%Aborting Save. Compress the config.[OK]


This error indicates that there is no free space left in NVRAM to save the running configuration as the startup configuration. As you may recall, nonvolatile random-access memory (NVRAM) is an EEPROM chip that holds the startup configuration file and retains its contents when the device is powered down or restarted.

In order to view the contents of the NVRAM just type:

Switch#dir nvram:
Directory of nvram:/

   36  -rw-       20748                    <no date>  startup-config
   37  ----        6592                    <no date>  private-config
    1  -rw-         657                    <no date>  IL-SW-UC-2H-#3801.cer

65536 bytes total (37120 bytes free)



As you can see, there are only about 37 KB free in NVRAM, and my configuration weighs a little more than that:

Switch#sh running-config
Building configuration...

Current configuration : 38563 bytes
!
! Last configuration change at 14:27:26 gmt Sun Jun 8 2014 by xxx
! NVRAM config last updated at 14:28:46 gmt Sun Jun 8 2014 by xxx
!
version 12.2
service nagle
no service pad
service tcp-keepalives-in
<OUTPUT OMITTED>

There is an option to use the service compress-config command, which compresses the startup configuration, but it is supported only on higher-end switches (such as the 45xx and 65xx series).
On my 2960 switch it gives the following output:

Switch(config)#service compress-config
Boot ROMs do not support NVRAM compression.
Disabling service compress-config.

Switch(config)#


So to solve this issue I had to remove some lines from the running configuration. After that, I was able to save the running-config to the startup-config:

Switch#wr
Building configuration...
[OK]
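
For illustration, these are the kinds of commands one might use to trim a configuration. The exact lines worth removing depend entirely on your own config; the interface range and ACL name below are hypothetical. Long banners, per-port descriptions, and unused ACLs are common space consumers:

Switch#configure terminal
Switch(config)#no ip access-list extended OLD-UNUSED-ACL
Switch(config)#no banner motd
Switch(config)#interface range gigabitEthernet 0/1 - 24
Switch(config-if-range)#no description
Switch(config-if-range)#end
Switch#wr

Make sure anything you remove is genuinely unused (for example, that no line still references the ACL) before deleting it.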







Sunday, November 17, 2013

Cisco switch Hulc LED process




While working on a Cisco 2960-S I noticed relatively high CPU utilization from a process called Hulc LED Process. This process is responsible for port link detection.

Note the process utilization:

SW# sh processes cpu sorted 1min
CPU utilization for five seconds: 20%/1%; one minute: 21%; five minutes: 20%
 PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
 138    46080263   8555487       5386 10.89% 10.69% 10.76%   0 Hulc LED Process
 292       67117       343     195676  0.00%  1.33%  0.33%   0 hulc running con
 107     3052131    402562       7581  1.09%  0.76%  0.74%   0 hpm counter proc
   4     3291391    183587      17928  0.00%  0.58%  0.71%   0 Check heaps      
 193     1430354   3734026        383  0.39%  0.53%  0.54%   0 Spanning Tree   
 102        1275       506       2519  1.29%  0.52%  0.25%   1 SSH Process     
 207      787430    402585       1955  0.29%  0.29%  0.28%   0 PI MATM Aging Pr
 147      859812     81139      10596  0.19%  0.19%  0.19%   0 HQM Stack Proces
   8      718295      6766     106162  0.00%  0.13%  0.13%   0 Licensing Auto U
 103      562016   5191334        108  0.09%  0.12%  0.14%   0 hpm main process
  69      365493   1865819        195  0.09%  0.07%  0.09%   0 RedEarth Tx Mana
 178       26413    136403        193  0.00%  0.06%  0.03%   0 IP Input        
  70      275384  17099996         16  0.00%  0.05%  0.05%   0 RedEarth Rx Mana
 277      101138      4403      22970  0.00%  0.04%  0.02%   0 VLAN Manager    
  10      281585    819737        343  0.00%  0.03%  0.05%   0 ARP Input       
 167      126435     79293       1594  0.00%  0.03%  0.02%   0 CDP Protocol    
 148       89198    162239        549  0.00%  0.03%  0.00%   0 HRPC qos request
  51      112753     81121       1389  0.09%  0.02%  0.00%   0 Compute load avg
  90       79497  11382877          6  0.00%  0.02%  0.00%   0 HLFM address ret
  38      137240      6864      19994  0.00%  0.02%  0.00%   0 Per-minute Jobs 
  64       73121  14914414          4  0.00%  0.02%  0.00%   0 Draught link sta
 290       25395    184969        137  0.00%  0.01%  0.00%   0 LACP Protocol   
 130       78811   1986300         39  0.00%  0.01%  0.00%   0 Hulc Storm Contr
  89       56164    402586        139  0.09%  0.00%  0.00%   0 HLFM aging proce
 228       53656    405943        132  0.00%  0.00%  0.00%   0 Socket Timers   
 139       24168    297706         81  0.00%  0.00%  0.00%   0 HL3U bkgrd proce
 233       25970    817312         31  0.00%  0.00%  0.00%   0 DHCPD Receive   
  88      142355  11363812         12  0.00%  0.00%  0.00%   0 HLFM address lea
  61        8382   1609843          5  0.00%  0.00%  0.00%   0 HUSB Console    
 180        5492    788015          6  0.00%  0.00%  0.00%   0 IP ARP Track    
  94        4991    135310         36  0.00%  0.00%  0.00%   0 HVLAN main bkgrd
         

To solve this issue, administratively shut down all ports that are not connected.
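
As a sketch, this can be done with the interface range command; the range below is hypothetical, so adjust it to cover the unconnected ports on your switch:

Switch#configure terminal
Switch(config)#interface range gigabitEthernet 0/10 - 24
Switch(config-if-range)#shutdown
Switch(config-if-range)#end
Switch#wr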

The result:

CPU utilization for five seconds: 8%/0%; one minute: 9%; five minutes: 8%
 PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
 107     3075484    405876       7577  0.69%  0.68%  0.65%   0 hpm counter proc
 193     1449905   3757786        385  0.49%  0.49%  0.49%   0 Spanning Tree   
 102         883       263       3357  0.69%  0.44%  0.20%   1 SSH Process     
   4     3318268    185088      17928  0.00%  0.44%  0.64%   0 Check heaps     
 207      796688    405899       1962  0.29%  0.26%  0.27%   0 PI MATM Aging Pr
 138    46115118   8634102       5341  0.09%  0.25%  0.22%   0 Hulc LED Process
   8      724256      6822     106164  0.00%  0.19%  0.13%   0 Licensing Auto U
 147      866823     81805      10596  0.19%  0.19%  0.19%   0 HQM Stack Proces
 167      127960     80072       1598  0.09%  0.06%  0.01%   0 CDP Protocol    
 178       26818    138168        194  0.29%  0.06%  0.01%   0 IP Input        
  51      113897     81787       1392  0.09%  0.04%  0.01%   0 Compute load avg
  10      283352    825517        343  0.00%  0.03%  0.04%   0 ARP Input       
  38      138421      6921      20000  0.00%  0.03%  0.00%   0 Per-minute Jobs 
 103      563737   5237463        107  0.00%  0.02%  0.02%   0 hpm main process
  64       73760  15040205          4  0.00%  0.01%  0.00%   0 Draught link sta
 148       89937    163573        549  0.00%  0.01%  0.00%   0 HRPC qos request
  33       67699    473302        143  0.00%  0.01%  0.00%   0 Net Background  
  90       79862  11478315          6  0.00%  0.00%  0.00%   0 HLFM address ret
  70      276251  17256414         16  0.00%  0.00%  0.00%   0 RedEarth Rx Mana
 139       24327    300155         81  0.09%  0.00%  0.00%   0 HL3U bkgrd proce
  37      197463    406062        486  0.00%  0.00%  0.00%   0 Per-Second Jobs

You can see that the CPU utilization has decreased significantly.