

UVS Manager v2.14.16 Release

Ambedded Technology, January 2nd, 2020.
We are glad to announce the release of UVS manager v2.14.16. This is our first release for Ceph Nautilus v14.2.5, and it contains many changes to support the Nautilus release, along with new features and enhancements. Please read these release notes carefully before you upgrade your Ceph cluster.
Major Changes from v2.12.15
* Upgrade Ceph to Nautilus.
Some notable features are listed below. Please refer to the Ceph Nautilus Release Notes for details.
   * New Ceph dashboard
   * You can decrease the number of placement groups at any time, and the cluster can automatically tune the PG count.
   * A new erasure code plugin, CLAY, is available for reducing recovery time.
* UVS manager new features
   * Add OSDs with encryption
   * Deploy multiple OSDs in parallel to reduce deployment time
   * Safely remove OSDs by moving them to the trash before destroying them
   * Control the storage devices' location LEDs to find the devices easily
   * Change the host network MTU (Maximum Transmission Unit) through the UVS manager
   RADOS Gateway
   * Support SSL encryption (HTTPS) on RADOS Gateways
   * Users can select the CRUSH rule while creating a RADOS Gateway
   * Automatically create all pools while creating a RADOS Gateway. This helps in planning the number of placement groups.
   Ceph File System
   * Automatically deploy a standby MDS (Metadata Server) on Monitor nodes after creating an active MDS.
   * Add an MDS fail-back feature to easily return a repaired MDS to active mode.
   CRUSH Map
   * Users can edit and rename the CRUSH types to match the data center infrastructure, with up to ten levels of CRUSH map hierarchy.
   * CRUSH rules and erasure code profiles support selecting the device class
* Other Enhancements
   * Use the ETCD distributed database to synchronize data between multiple UVS manager services, which makes pages such as the UVS Manager Node Page load much faster
   * Hide UVS manager notifications, users, and login information while the user is editing the password
   * Enable failover of the UVS notification service
   * Add a Generate and Download Diagnostic Log feature
   * Automate firmware upgrades with the flexibility to group multiple nodes and upgrade them sequentially or in parallel. The upgrade procedure is customized for each UVS version upgrade.
   * Allow the user to use characters other than a-z in the UVS user name
   * Allow the user to use special characters beyond a-z, and passwords up to 128 characters
   * Improve the Ansible Execution Performance
   * Add Disk Diagnostics to the admin console
   * Change the node Ethernet bonding mode to mode 6 (balance-alb)
   * Allow the user to delete old update (upd and rpm) files
   * Upgrade PHP from version 5 to PHP 7 for better performance.
Bugs Fixed
   * Creating a Multisite RGW created extra pools if a standalone RGW already existed. The issue is corrected so that the same group of pools is used.
   * An active CephFS MDS needs a standby MDS for failover. The UVS manager now automatically creates a standby MDS when the user creates the active MDS.
   * Firmware Update failed if a node of the Ceph cluster was UNREACHABLE.
   * The date/time on the UVS manager NTP page always displayed UTC+0. It now uses the same time zone as the NTP server set in the system.
   * The node Admin Console feature "Change Password" now changes the admin password instead of the root password.
   * Opening the RGW user management page no longer triggers the creation of the RGW pool.
   * The UVS manager now shows a notification for duplicate inputs when the user creates multiple OSDs or CRUSH map buckets at a time.
New network bonding mode option:
Since this UVS version, the admin console offers the new bonding mode 6 (balance-alb) besides the old default mode 2 (balance-xor).
Mode 6 uses adaptive load balancing and supports failover. It does not require any special switch support, is simpler to configure on the switch, and is more stable than mode 2.
You do not have to make any changes to an in-production Mars 400 cluster.
We recommend using bonding mode 6 when deploying a new Mars 400 appliance. Mode 2 is still available.
The configuration of the network ports on the top-of-rack switch for a Mars 400 differs between bonding mode 2 and mode 6. Please refer to the following diagrams.
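For reference, mode 6 corresponds to a Linux bonding configuration along these lines. This is an illustrative fragment only; file locations, interface names, and option values vary by distribution, and the UVS admin console applies the change for you:

```
# Illustrative fragment in RHEL/CentOS-style ifcfg syntax
# (actual paths and names on a Mars 400 node may differ)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=balance-alb miimon=100"
```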
Feature Highlights
* Using the ETCD distributed database to improve UVS manager performance.
Before v2.14.16, the UVS manager did not have a highly available database to keep the latest status of the Ceph cluster in the background. When the user switched to a new page, the UVS manager had to query a lot of information from all nodes, so on a large Ceph cluster some pages took a long time to load. The ETCD database now updates the cluster status data automatically, which shortens page load times considerably. ETCD keeps the UVS manager status data synchronized between all Monitor nodes, so users see the same information no matter which Monitor node's UVS manager they use. The notification service on the UVS manager is also highly available now thanks to ETCD.
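The caching pattern can be sketched as follows. This is a minimal illustration of the idea, not UVS manager code: a background job publishes the cluster status to a shared key-value store (a plain dict stands in for ETCD here), and page loads read the cached copy instead of querying every node.

```python
import json
import time

# In-memory stand-in for the ETCD key-value store (illustrative only;
# the UVS manager talks to a real etcd cluster shared by all Monitor nodes).
store = {}

def refresh_cluster_status(collect):
    """Background job: query the cluster once and publish the result."""
    status = collect()
    store["/uvs/cluster_status"] = json.dumps({"ts": time.time(), **status})

def load_page_status():
    """Page load: read the cached status instead of querying every node."""
    raw = store.get("/uvs/cluster_status")
    return json.loads(raw) if raw else None

# Simulated collector standing in for the real per-node queries.
refresh_cluster_status(lambda: {"osd_up": 20, "osd_in": 20})
cached = load_page_status()
print(cached["osd_up"])  # 20
```

Because every Monitor node reads from the same store, each one serves the same cached view, and a surviving node can keep serving it if another fails.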
* Integrate the CLAY erasure code to reduce OSD failure recovery time by 60%
Ceph’s default replication provides excellent protection against data loss by storing three copies. However, storing three copies of the data increases hardware cost as well as power consumption and cooling.
Erasure code offers a solution similar to RAID 6: it consumes less raw capacity while providing the same level of data protection as three replicas. The drawback of erasure code is that recovering the data stored on failed disks takes much longer. Shortening that recovery time is a big motivation for using erasure code.
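The capacity trade-off is easy to quantify. The sketch below compares the usable fraction of raw capacity for a 3-replica pool and an erasure-coded pool with k data chunks and m coding chunks; with m=2 both configurations survive the loss of two devices:

```python
def usable_ratio_replica(copies: int) -> float:
    """Fraction of raw capacity that stores user data with replication."""
    return 1 / copies

def usable_ratio_ec(k: int, m: int) -> float:
    """Fraction of raw capacity that stores user data with erasure coding
    (k data chunks plus m coding chunks per object)."""
    return k / (k + m)

# Replica 3 vs. the k=4, m=2 profile used in the benchmark:
print(round(usable_ratio_replica(3), 3))  # 0.333 -> 3x raw capacity per byte
print(round(usable_ratio_ec(4, 2), 3))    # 0.667 -> only 1.5x raw capacity
```

So a k=4, m=2 pool doubles the usable capacity of the same hardware compared to replica 3, at the cost of longer recovery, which is exactly what the CLAY plugin addresses.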
In this release, we introduce the new Clay (Coupled Layer) erasure code plugin. Clay offers a simplified construction for decoding and repair in Ceph, so it recovers from an OSD failure in much less time than traditional erasure codes. A benchmark of recovery time on replica 3, Jerasure, and Clay erasure code pools shows that Clay improves recovery time by 62% compared to the Jerasure code.
The following diagrams show the performance and recovery tests on 21 OSDs with k=4, m=2, s=5. The performance test tool is RADOS bench.
* Create OSDs in parallel to shorten deployment time
Deploying multiple OSDs with the UVS manager is now much faster: the new release deploys multiple OSDs in parallel instead of sequentially. For example, deploying 20 OSDs now takes only 21 minutes, where it used to take 100 minutes.
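The speed-up comes from running the per-OSD deployment tasks concurrently rather than one after another. A minimal Python sketch of the pattern, where deploy_osd is a hypothetical stand-in for the real Ansible-driven deployment:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def deploy_osd(device: str) -> str:
    """Stand-in for deploying one OSD (the real work is done by the
    UVS manager's Ansible playbooks on each node)."""
    time.sleep(0.01)  # simulate per-OSD deployment work
    return f"{device}: deployed"

devices = [f"/dev/sd{chr(ord('b') + i)}" for i in range(20)]

# Sequential: total time is the sum of all per-OSD deployments.
# Parallel: total time approaches the slowest single deployment.
with ThreadPoolExecutor(max_workers=len(devices)) as pool:
    results = list(pool.map(deploy_osd, devices))

print(len(results))  # 20
```

With 20 concurrent tasks the wall-clock time is dominated by the slowest single OSD rather than the sum of all of them, which matches the drop from 100 minutes to roughly the duration of one deployment round.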
* OSD encryption: users have the option to encrypt OSDs while deploying new ones. OSD encryption uses dm-crypt in the Linux kernel.
* Allow the user to define CRUSH type names.
Ceph uses the CRUSH map and rules to define the data placement of a pool as a hierarchy. The CRUSH types define the levels of the infrastructure in that hierarchy. The default CRUSH types are root, datacenter, room, row, pod, pdu, rack, chassis, and host.
However, the administrator may want to use CRUSH types other than the defaults. For example, racks are located in cages, cages on floors, floors in a building, and buildings in a data center. Customizing the types normally requires decompiling and recompiling the CRUSH map.
In v2.12.15 and earlier, users could create the CRUSH map only with the types rack and chassis. Since v2.14.16, users can customize the CRUSH types easily in the web user interface, with up to 10 levels of hierarchy. You can also rename the default bucket types to any name that suits your real infrastructure.
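For comparison, the manual approach is to decompile the CRUSH map with crushtool, edit its types section by hand, and recompile it. An illustrative decompiled types section using the cage/floor/building example above might look like this (type names and numbering are examples, not output from a real cluster):

```
# Manual workflow the UVS manager now replaces:
#   crushtool -d compiled-crushmap -o crushmap.txt
#   ... edit crushmap.txt ...
#   crushtool -c crushmap.txt -o compiled-crushmap
type 0 osd
type 1 host
type 2 rack
type 3 cage
type 4 floor
type 5 building
type 6 datacenter
type 7 root
```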

CRUSH Map Example:

* LED control in the UVS manager
Locating a failed disk drive must be done with great care: replacing the wrong drive could make the situation even worse. In this release of the UVS manager, the administrator can use the UVS manager to blink the LED of the chassis and the LED beside each storage device. This makes the replacement job easier and reduces the possibility of human error.
* Enable users to download logs through the UVS manager for troubleshooting
When supporting customers, it is often necessary to collect logs and system status to find the root cause of an issue. Instead of collecting logs and checking the Ceph status manually, Ambedded has implemented a new feature that automatically collects the information needed for troubleshooting. Remote access and manual data collection are no longer required.