File lpar2rrd.changes of Package lpar2rrd

Tue Apr  1 15:09:23 UTC 2014 -

- updated to version 4.01
- bugfix release
  * OS CPU in historical reports always showed the last month's graph
  * OS memory graphs had a 100 GB limit on input memory
    used/free/cache values (values above 100 GB appeared as NaN in graphs)
  * Fixed the LPAR2RRD daemon and the conversion of old OS agent memory
    data into the new v4.00 format on the CentOS platform
    (other Linux platforms might also be affected)
  * Global historical reports: when an LPAR with no OS
    agent was selected, the graph did not appear

Tue Apr  1 13:38:12 UTC 2014 - root@localhost

- updated to version 4.00
- Operating System agent enhancements.
- OS agent v4.00 is now able to monitor:
  * OS CPU utilization of user/sys/IO wait/idle in %
  * Memory utilization of used/pinned/fs cache/free memory in MB (v3.60)
  * Paging rate in MB/sec (v3.60)
  * Paging space utilization in %
  * SAN (fiber channel) throughput in MB/sec
  * SAN (fiber channel) throughput in IO/sec
  * LAN (ethernet) throughput in MB/sec
  * SEA (Shared Ethernet Adapter) throughput in MB/sec (only on VIO servers)
  * AME (Active Memory Expansion) allocation
  * Reporting of paging activity that exceeds a threshold via the alerting module (v3.60)
- OS agent now supports:
  * AIX 5.1+
  * Linux on Power
  * It is able to report CPU utilization for CPU-dedicated LPARs
    (which cannot be monitored via the HMC)
  * LPARs not managed by the HMC (targeted mainly at old POWER4/5 full-partition setups)
- Configuration Advisor enhancements:
  * Memory Configuration Advisor: a new feature which makes
    recommendations for memory changes based on the OS agent
    data (memory utilization, paging rate and allocation)
  * CPU advisor: it reports unused entitlement which can be
    reduced without affecting anything
  * CPU advisor: output can be exported into a CSV file

- Historical reports work with the OS agent data
- Paging rate aggregated graphs per server (frame).
- This makes it easy to check with one click whether any
  LPAR is paging or has paged in the past.
- The GUI has been redesigned using JavaScript features such as tabs.
- There will be a brand new GUI in the next version.
- "LPAR search" supports regular expressions

Tue Jan 14 15:39:30 UTC 2014 -

- minor update to version 3.61
- fixes a 3.60 issue where all memory-agent-related
  stored values were limited to 100 GB

Mon Nov 11 00:33:10 UTC 2013 -

- Operating System agent
  for retrieving memory and paging utilization data from LPARs.
  It brings the following:
    - OS memory usage graphs (LPAR detail page)
    - Paging activity graphs (LPAR detail page)
    - Paging activity alerting via alerting module
- Custom groups now show aggregated memory graphs
  together with CPU graphs
- LPAR groups: allocated memory and OS memory graphs (if the
  OS agent is running there)
- CPU pools: memory capacity for "ALL CPU Pools"
- Data health check: once a day, a check verifies that all
  data is being stored regularly. A list of all LPARs or CPU pools not
  updated for more than 24h can be found under:
  LPAR2RRD menu section --> Data check.
- Custom groups - CPU pools: a line has been added with the sum of
  all POOL entitlements (useful especially for licensing purposes)
- Capacity on Demand graphs if that feature is used
- LPAR search: now also searches current profile names
  (results appear below the LPAR name search results)
- RMC check: once a day, the RMC connection to all
  LPARs is checked (Global --> RMC check)
- LPARs aggregated graphs per HMC (useful for systems with
  CPU-dedicated partitions, where the usual "Total CPU util per HMC/SDMC"
  does not show dedicated partitions)
- A summary of running LPARs and servers per HMC is counted and
  graphed once a day (Global --> HMC totals)
- Global "Historical reports" now also include CPU pool
  historical graphs (previously only LPARs were available)

Mon Sep  2 12:56:05 UTC 2013 -

- CPU Configuration Advisor automatically verifies the logical CPU
  setup of all your LPARs and POOLs based
  on historical utilization data
- Every graph is now a link to a pop-up window which shows
  the same data at three times the size
- All 4 graphs of each LPAR now have the same upper limit
- PureFlex support
- Views are again always up to date, as they had been before 3.40
  (since 3.40, views had been refreshed only once a day)
- The layout of the graphs produced by the CPU Workload
  Estimator has been slightly redesigned
- A picture of the current CPU load can now be attached
  to each email alert.
- It is configurable in etc/alert.cfg via the EMAIL_GRAPH
  parameter: 0 - disabled, otherwise the last 1 - X hours are included in the graph
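  For illustration, a minimal sketch of the relevant setting in
  etc/alert.cfg (the KEY=value syntax and the value 8 are assumptions
  for this example, not taken from this changelog):

```shell
# etc/alert.cfg -- hypothetical example
# EMAIL_GRAPH=0 disables the attached graph; a positive value N
# includes the last N hours of CPU load in the attached graph
EMAIL_GRAPH=8
```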
- And many other small changes and bug fixes ...
- Runtime is now much faster (depending on your environment
  it can be 4 - 5 times faster!). The main reason is that LPAR
  graphs are no longer created in advance but on demand
  via the web CGI-BIN when the LPAR is selected.
- Many data-related issues have been fixed.
  As a result, the product should be much more robust
  against various HMC data inconsistencies.
- If you see occasional gaps in your data, then definitely
  go for this upgrade!
- Fixed long data loads of servers with hundreds of LPARs.
  This could take hours under particular conditions; now it takes minutes.
- The product now supports LPAR rename! When you rename an LPAR,
  LPAR2RRD recognizes that and renames it internally as well,
  so the new LPAR name keeps the data history of the original LPAR name.
- CPU Workload Estimator enhancement which now allows estimations
  for new servers (the complete IBM Power™ product line).
- Active Memory Sharing support. It is running on Live demo site.
- Memory aggregated graphs: apart from the standard static memory graph,
  there is a new one aggregating all LPARs on the server.
- The list of parameters which can be exported to CSV from the
  Physical and Logical configuration has been significantly enhanced.
- Linux hosting of LPAR2RRD has been verified and tested;
  it is now fully functional
- Custom group: a summary line with the total CPU for all pools/LPARs
  is printed below each graph
- Possibility to export physical and logical configuration into CSV
- Colors for aggregated graphs have been changed to
  better differentiate between partitions

Sat Nov 10 22:05:11 UTC 2012 -

- update to latest upstream 3.20
  * Custom groups feature
  * LPAR2RRD now has a built-in alerting feature.
  * The Favourites feature allows you to choose your most important
    or most often viewed CPU pools or LPARs and place them
    into a separate menu for quick access.

Sat Aug 18 23:02:45 UTC 2012 -

- update to latest upstream 3.15
  * Introduction of CPU Workload Estimator
  * CPU shared pool names are shown in Historical reports instead
    of the generic "CPU pool 1" ...
  * fixed wrong retention of 5-min averages in RRDTool; it was 45 days but should be 90
  * SDMC LPAR id-to-name translation did not work when there was more
    than one server under the SDMC
  * When the last CPU pool was deleted, lpar2rrd failed
    during data load as it could not map old data from that pool

Thu May 31 12:10:10 UTC 2012 -

- Live Partition Mobility and generally lpar migration support
- Top10 page, new "Physical and Logical cfg",
  aggregated graphs per CPU pools

Tue Jan 17 08:09:19 UTC 2012 -

- new upstream version
  - Apart from the standard daily/weekly/monthly/yearly graphs,
    there are now yearly trend graphs that predict the CPU usage of
    servers/CPU pools/LPARs 1 year ahead.

Thu Jan  5 15:31:55 UTC 2012 -

- initial packaging attempt...