
Unified Virtual Storage (UVS) Manager

UVS 2.0
UVS Manager Software Stack

OVERVIEW

Unified Virtual Storage (UVS) Manager is a web-based graphical user interface (GUI) that enables administrators to manage Ceph software-defined storage (SDS) on Ambedded Technology’s Mars series appliances. Ceph is open source software implementing SDS object storage on a single distributed cluster, providing interfaces for object-, block- and file-level storage. Ceph aims for distributed operation without a single point of failure, scalable to exabyte capacities. Ceph replicates data across commodity hardware, making it fault tolerant. As a result of this design, a Ceph system is both self-healing and self-managing, minimizing administration and its associated costs. Ambedded’s Mars series integrates decentralized ARM-based microservers with open source Ceph software and the web-based UVS Manager, delivered as an SDS appliance. This tight integration lets customers easily use, scale and manage storage.

 

Manage the Ceph SDS Cluster

Ceph is a powerful product, providing a unified SDS platform with scalability and high availability. However, Ceph management is complicated. UVS Manager gives system administrators a straightforward GUI that automates the sophisticated command line interface (CLI) and averts human error. System administrators can manage a Ceph cluster after just a few hours of training. Compared to other Ceph products, many of which provide only a CLI or limited dashboard functionality, UVS reduces administration dramatically, lowering operating costs through greater administrative productivity.

 

Performance & Stability Tuning

In addition to the management interface, the UVS backend optimizes Ceph parameters for the Mars platform to deliver performance with stability. This saves administrators tuning time and expedites new Ceph cluster deployments.

 

AT A GLANCE

- Web-based Ceph appliance management tool
- Deploy a Ceph cluster
- Create MON
- Replica and erasure code
- Pool management
- RBD image
- Create CephFS
- OSD and pool usage
- RADOS gateway & users
- OpenStack pools
- Live migration
- Clear dashboard
- Create & manage NTP
- Create and manage OSD, MDS
- CRUSH map, bucket, rule set
- Cache tiering
- Snapshot, clone & flatten
- CephX key and user capability
- Audit log and notification
- Multi-site DR
- iSCSI gateway and LUN
   

 

 

KEY FEATURES

Cluster & NTP Server Deployment

- Deploy the first monitor and OSD to bring up a Ceph cluster from scratch.
- Set up an NTP server: Ceph tolerates only a very small clock skew between nodes.
- NTP options: create an NTP server on a MON node or use an existing NTP server.
- A single click pushes the NTP settings to every Ceph node (a quick verification sketch follows this list).
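
For readers who want to verify the result outside the GUI, the sketch below uses Ceph's python-rados binding to confirm the monitor quorum and inspect clock synchronization after deployment. It assumes an admin keyring and /etc/ceph/ceph.conf on the node, and only illustrates the kind of check UVS automates; it is not the UVS implementation.

    import json
    import rados

    # Connect with the cluster's admin identity (paths are placeholders).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # 'quorum_status' shows whether the monitors formed a quorum;
        # 'time-sync-status' reports clock skew between MON nodes.
        for prefix in ('quorum_status', 'time-sync-status'):
            ret, out, errs = cluster.mon_command(
                json.dumps({'prefix': prefix, 'format': 'json'}), b'')
            print(prefix, '->', out.decode())
    finally:
        cluster.shutdown()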

Dashboard

The dashboard provides graphical cluster information (the sketch after this list shows where the figures come from), including:
- Ceph cluster status
- Warning and error messages
- OSD and MON status
- Placement group health status
- Cluster capacity usage
- Throughput metrics
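
The figures on the dashboard come from standard Ceph queries. A minimal python-rados sketch of the same data follows; the configuration path is a placeholder and this is not the actual UVS code.

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Overall health string (HEALTH_OK / HEALTH_WARN / HEALTH_ERR)
        # plus the individual warning/error checks shown on the dashboard.
        ret, out, errs = cluster.mon_command(
            json.dumps({'prefix': 'health', 'format': 'json'}), b'')
        health = json.loads(out)
        print('health:', health['status'])
        for name, check in health.get('checks', {}).items():
            print(' ', name, '-', check['summary']['message'])

        # Capacity usage, as shown on the dashboard gauges.
        stats = cluster.get_cluster_stats()
        print('used %d KiB of %d KiB, %d objects'
              % (stats['kb_used'], stats['kb'], stats['num_objects']))
    finally:
        cluster.shutdown()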

MON/OSD Management

Monitors and OSDs are the two key daemons forming a Ceph cluster. With UVS, administrators can easily manage these daemons with the following functions (a CLI-level sketch follows the list):
- MON create, restart and reboot
- OSD create, restart, reboot and remove
- Add multiple OSDs
- MON and OSD network and health status
- OSD disk SMART information

Pool Management & Cache Tiering

A pool is the basic storage resource that stores objects written by clients. Administrators create pools with data protection via either replication or erasure coding, and CRUSH rules configure each pool's failure domain. A sketch of the equivalent commands follows the list.
- Pool create/delete
- Pool configuration: name, replica/erasure code, quota, CRUSH rule, placement groups
- Cache tiering: with pools of different speeds, a faster pool can be set as the cache tier of a slower pool.
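
For illustration, a python-rados sketch of the underlying steps: create a replicated pool, set its replica count, and attach a faster pool as its cache tier. Pool names are examples and the command fields mirror Ceph's own CLI; this is not the UVS implementation.

    import json
    import rados

    def mon_cmd(cluster, **cmd):
        """Send one JSON-formatted command to the monitors."""
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
        assert ret == 0, errs
        return out

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Create the slow (backing) and fast (cache) pools if missing.
        for pool in ('data', 'cache'):
            if not cluster.pool_exists(pool):
                cluster.create_pool(pool)

        # Replica count for the backing pool.
        mon_cmd(cluster, prefix='osd pool set', pool='data', var='size', val='3')

        # Put the faster pool in front of the slower one as a cache tier.
        mon_cmd(cluster, prefix='osd tier add', pool='data', tierpool='cache')
        mon_cmd(cluster, prefix='osd tier cache-mode', pool='cache', mode='writeback')
        mon_cmd(cluster, prefix='osd tier set-overlay', pool='data', overlaypool='cache')
    finally:
        cluster.shutdown()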

CRUSH Map Configuration

Ceph uses the CRUSH algorithm to distribute replicated data and erasure-coded chunks across a configurable failure domain. CRUSH relies on a map to avoid single points of failure, performance bottlenecks and scalability limits. UVS enables configuration of the CRUSH map and rule sets (see the sketch after this list).
- Create/delete buckets: root, rack, chassis
- Move hosts: assign hosts to their chassis
- List and create CRUSH rules
- Graphical CRUSH map
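
A python-rados sketch of equivalent CRUSH operations: add a rack bucket, move it under the default root, move a host into it, and dump the CRUSH tree. Bucket and host names are examples only, not values UVS prescribes.

    import json
    import rados

    def mon_cmd(cluster, **cmd):
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
        assert ret == 0, errs
        return out

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # New rack bucket under the default root, then move a host into it.
        mon_cmd(cluster, prefix='osd crush add-bucket', name='rack1', type='rack')
        mon_cmd(cluster, prefix='osd crush move', name='rack1', args=['root=default'])
        mon_cmd(cluster, prefix='osd crush move', name='node1', args=['rack=rack1'])

        # Dump the resulting hierarchy (roots, racks, hosts, OSDs).
        out = mon_cmd(cluster, prefix='osd crush tree', format='json')
        print(json.dumps(json.loads(out), indent=2))
    finally:
        cluster.shutdown()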

 


 

RBD Images Management & Snapshot

RBD images are block devices striped over objects and stored in a backend Ceph pool. UVS creates images against a specified backend pool. Other UVS image management tasks include the following (a python-rbd sketch follows the list):
- Create and delete images
- Assign image object size
- Set and resize image size
- Snapshot, clone and flatten images
- List images with their name, image size, object size and watchers (users)
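
A short python-rbd sketch of the same image lifecycle: create, resize, snapshot, clone and flatten. Pool, image and snapshot names are examples; UVS performs these steps through its GUI.

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')              # backend pool
        r = rbd.RBD()

        # 4 GiB format-2 image with layering, so it can be cloned later.
        r.create(ioctx, 'img1', 4 * 1024 ** 3,
                 old_format=False, features=rbd.RBD_FEATURE_LAYERING)

        with rbd.Image(ioctx, 'img1') as img:
            img.resize(8 * 1024 ** 3)                  # grow to 8 GiB
            img.create_snap('snap1')
            img.protect_snap('snap1')                  # clones need a protected snapshot

        r.clone(ioctx, 'img1', 'snap1', ioctx, 'img1-clone')
        with rbd.Image(ioctx, 'img1-clone') as clone:
            clone.flatten()                            # detach the clone from its parent

        print(r.list(ioctx))                           # image names in the pool
        ioctx.close()
    finally:
        cluster.shutdown()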

Erasure Code Profile Management

Before creating an erasure-coded pool, administrators create an erasure code profile specifying the data chunk (K) and coding chunk (M) values and a failure domain. UVS makes this straightforward.
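
For illustration, the equivalent steps via python-rados: define a K=4, M=2 profile with a host failure domain, then create an erasure-coded pool on it. The profile name, pool name and pg_num are example values, not defaults set by UVS.

    import json
    import rados

    def mon_cmd(cluster, **cmd):
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
        assert ret == 0, errs
        return out

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Profile: 4 data chunks, 2 coding chunks, host failure domain.
        mon_cmd(cluster, prefix='osd erasure-code-profile set', name='k4m2',
                profile=['k=4', 'm=2', 'crush-failure-domain=host'])

        # Erasure-coded pool built on that profile.
        mon_cmd(cluster, prefix='osd pool create', pool='ecpool',
                pg_num=64, pgp_num=64, pool_type='erasure',
                erasure_code_profile='k4m2')
    finally:
        cluster.shutdown()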

Client User Access Control

Ceph requires authentication and authorization via a username and keyring. UVS manages user access and creates the associated keyring, which administrators can download after creation; a sketch of the underlying CephX operation follows.
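
A python-rados sketch of that CephX operation: create a client entity with read access to the monitors and read/write access to one pool, then print its keyring. The entity and pool names are examples only.

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ret, out, errs = cluster.mon_command(json.dumps({
            'prefix': 'auth get-or-create',
            'entity': 'client.app1',
            'caps': ['mon', 'allow r', 'osd', 'allow rw pool=rbd'],
        }), b'')
        assert ret == 0, errs
        # Prints "[client.app1] key = ..."; save it as the client's keyring file.
        print(out.decode())
    finally:
        cluster.shutdown()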

Usage Detail

Usage Detail lists the size, weight, usage percentage and availability of each root, rack, chassis and host/disk. Pool usage data such as bytes used, usage percentage, maximum capacity and number of objects are also listed (see the sketch below).
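
The same figures can be read programmatically; a python-rados sketch follows, with the per-pool fields taken from the JSON output of Ceph's df command. Paths are placeholders.

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Cluster-wide usage.
        stats = cluster.get_cluster_stats()
        print('cluster: %d KiB used of %d KiB, %d objects'
              % (stats['kb_used'], stats['kb'], stats['num_objects']))

        # Per-pool usage, equivalent to 'ceph df'.
        ret, out, errs = cluster.mon_command(
            json.dumps({'prefix': 'df', 'format': 'json'}), b'')
        for pool in json.loads(out)['pools']:
            s = pool['stats']
            print(pool['name'], s['bytes_used'], 'bytes,', s['objects'], 'objects')
    finally:
        cluster.shutdown()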

Object Storage

UVS Manager supports object storage. Applications can access it through Amazon S3- and OpenStack Swift-compatible APIs via the RADOS gateway.

Administrators use UVS to create a multi-site RADOS gateway for active-active disaster recovery.

UVS provides the following object storage features (an S3 client sketch follows the list):
- Creating a RADOS gateway, either standalone or multi-site master/secondary
- Creating RADOS gateways on x86 servers
- Configuring RADOS gateway pools
- Number of replicas
- Number of placement groups
- Changing the CRUSH rule set
- Changing replication to erasure coding, or vice versa
- Creating/deleting S3/Swift users and access keys
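
Once a gateway and an S3 user exist, applications can talk to it with any S3 client. A short boto3 sketch follows; the endpoint URL and the access/secret keys stand in for values created in UVS.

    import boto3

    # S3-compatible access to the RADOS gateway (7480 is RGW's default port;
    # the endpoint and credentials below are placeholders).
    s3 = boto3.client(
        's3',
        endpoint_url='http://rgw.example.com:7480',
        aws_access_key_id='ACCESS_KEY_FROM_UVS',
        aws_secret_access_key='SECRET_KEY_FROM_UVS',
    )

    s3.create_bucket(Bucket='demo')
    s3.put_object(Bucket='demo', Key='hello.txt', Body=b'hello from Ceph RGW')

    for obj in s3.list_objects_v2(Bucket='demo').get('Contents', []):
        print(obj['Key'], obj['Size'])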

 


 

OpenStack

Cloud platforms like OpenStack require storage that is reliable, scalable, unified and distributed. Ceph integrates easily with OpenStack components such as Cinder (block), Manila (file), Swift (object), Glance (images), Nova (VM virtual disks) and Keystone (identity). The UVS OpenStack options let administrators create the pools and keys used by OpenStack with a single click. UVS generates the Ceph client keyrings for client.glance, client.cinder and client.nova, plus ceph.conf, and supports downloading or copying them for client access (see the sketch below).
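
As an illustration of the kind of keyring prepared for OpenStack, here is a python-rados sketch creating a client.glance user for an 'images' pool. The caps and pool name follow the common Ceph-with-OpenStack convention and are assumptions here, not the exact output of UVS.

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ret, out, errs = cluster.mon_command(json.dumps({
            'prefix': 'auth get-or-create',
            'entity': 'client.glance',
            'caps': ['mon', 'profile rbd', 'osd', 'profile rbd pool=images'],
        }), b'')
        assert ret == 0, errs
        # Save as /etc/ceph/ceph.client.glance.keyring on the Glance host.
        print(out.decode())
    finally:
        cluster.shutdown()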


iSCSI

This feature creates iSCSI gateways on external servers or on internal MON nodes and manages iSCSI LUNs with CHAP and ACL authentication.

Audit Log

The audit log automatically tracks every user action on the Ceph cluster. It records the user, log-on time, action performed and resulting status. Logs can be forwarded to external syslog servers.

Notification – Alerts on e-mail

Administrators can configure UVS to send Ceph warning and error messages to multiple email addresses, removing the need for constant Dashboard monitoring. When problems arise, users receive email alerts.

UVS User Management

Administrators can create multiple UVS Manager users with configurable names, passwords and access levels. Users can be either full-function administrators or view-only users, and can change their passwords later.

Firmware Update

Use the Firmware Update function to upload an Ambedded-released update file to one of the MON nodes and push it to all nodes in the cluster with a single click. Firmware updates do not disrupt storage operations.