ceph (with examples)
- Linux
- November 5, 2023
1: Check cluster health status
ceph status
Motivation: Checking the cluster health status helps administrators identify any potential issues or problems with the Ceph storage system. It provides a quick overview of the overall health and functionality of the cluster.
Explanation: The ceph status command displays the current status of the Ceph cluster. It provides information about the number of OSDs (Object Storage Daemons), the state of the Placement Groups (PGs), the status of the cluster services such as the monitors and managers, and any warning or error messages.
Example Output:
  cluster:
    id:     12345678-90ab-cdef-1234-567890abcdef
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum a (age 1d)
    mgr: a(active, since 1d)
    osd: 3 osds: 3 up (since 1d), 3 in (since 1d)

  data:
    pools:   3 pools, 300 pgs
    objects: 1000 objects, 600 MiB
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:     300 active+clean
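If only a quick health summary is needed, or more context on a warning, two related invocations of the same tool are commonly used (shown here as illustrative usage):
ceph -s              # short alias for ceph status
ceph health detail   # expands any HEALTH_WARN or HEALTH_ERR messages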
2: Check cluster usage stats
ceph df
Motivation: Monitoring the cluster usage statistics helps administrators keep track of available storage, identify potential capacity issues, and plan for future expansion or optimization.
Explanation: The ceph df command provides detailed information about disk space utilization in the Ceph cluster. It reports the total capacity, used space, and available space per device class, as well as the usage, utilization percentage, and number of objects for each storage pool.
Example Output:
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED    %RAW USED
    ssd       1.0 TiB     400 GiB     624 GiB      624 GiB        60.94
    hdd       5.0 TiB     2.4 TiB     2.6 TiB      2.6 TiB        52.00
    TOTAL     6.0 TiB     2.8 TiB     3.2 TiB      3.2 TiB        53.49

POOLS:
    NAME      ID     USED        %USED     OBJECTS
    pool1      1     150 GiB        25         500
    pool2      2     400 GiB        40        1200
    pool3      3     300 GiB        30         800
    pool4      4     100 GiB        10         400
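When a per-pool breakdown with additional columns (such as pool quotas) is needed, the detail variant of the same command can be used:
ceph df detail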
3: Get statistics for placement groups in a cluster
ceph pg dump --format plain
Motivation: Understanding the distribution and state of placement groups (PGs) within the cluster can help administrators assess the load balancing, identify uneven distribution, and detect potential performance bottlenecks.
Explanation: The ceph pg dump --format plain command retrieves a detailed listing of the placement groups in the Ceph cluster. For each PG it reports the PG ID, state, object count, byte count, log sizes, and the acting and up OSD sets, including the primary OSD.
Example Output:
PG_STAT   OBJECTS   DEGRADED   MISPLACED   UNFOUND   BYTES     LOG   DISK_LOG   STATE
1.0             0          0           0         0       0 B     0          0   active+clean
1.1            10          0           0         0    10 KiB    12         12   active+clean
2.0          1000         10         100         0    50 MiB   150        150   active+recovering+degraded
2.1         10000         10          50         0   100 GiB   600        600   active+remapped+backfilling
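For a quicker overview, a couple of related subcommands of the same tool provide condensed views (shown here as illustrative usage):
ceph pg stat             # one-line summary of PG states
ceph pg dump pgs_brief   # only PG IDs, states, and the up/acting OSD sets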
4: Create a storage pool
ceph osd pool create pool_name pg_num
Motivation: Creating storage pools allows administrators to organize and manage data in a Ceph cluster. They can allocate specific resources and set different storage policies for different applications, clients, or data types.
Explanation: The ceph osd pool create command creates a new storage pool in the Ceph cluster with the specified name and number of Placement Groups (PGs). The pool_name argument specifies the desired name for the pool, and the pg_num argument defines the number of PGs to create for the pool.
Example Code:
ceph osd pool create mypool 128
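Since the Luminous release, Ceph raises a health warning for pools that are not associated with an application, so a newly created pool is usually tagged right away; rbd below is just an example application name:
ceph osd pool application enable mypool rbd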
5: Delete a storage pool
ceph osd pool delete pool_name pool_name --yes-i-really-really-mean-it
Motivation: Deleting a storage pool helps administrators free up resources, reclaim disk space, or remove unnecessary data from the cluster. It is useful when a pool is no longer needed or when it needs to be reconfigured.
Explanation: The ceph osd pool delete command permanently removes a storage pool from the Ceph cluster. As a safeguard, the pool name must be given twice together with the --yes-i-really-really-mean-it flag; without them the monitors refuse to delete the pool.
Example Code:
ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
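Pool deletion is additionally gated by the mon_allow_pool_delete option, which is disabled by default on most deployments; on releases with the ceph config interface it can be enabled beforehand like this (and ideally disabled again afterwards):
ceph config set mon mon_allow_pool_delete true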
6: Rename a storage pool
ceph osd pool rename current_name new_name
Motivation: Renaming a storage pool allows administrators to update pool names to better reflect their purpose or to align with the changing needs of the environment. It simplifies management and ensures consistency within the cluster.
Explanation: The ceph osd pool rename command changes the name of an existing storage pool in the Ceph cluster. The current_name argument specifies the current name of the pool, and the new_name argument defines the desired new name.
Example Code:
ceph osd pool rename oldpool newpool
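To confirm that the rename took effect, the existing pools can be listed:
ceph osd lspools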
7: Repair an inconsistent placement group
ceph pg repair pg_id
Motivation: Repairing placement groups helps maintain data integrity and reliability in the Ceph cluster. It resolves inconsistencies detected during scrubbing, ensuring that the data in the affected placement group remains accessible and intact.
Explanation: The ceph pg repair command instructs Ceph to repair a specific placement group. The pg_id argument identifies the placement group in the form pool_id.pg_id (for example 1.6); the command is typically run after a scrub reports the PG as inconsistent.
Example Code:
ceph pg repair 1.6
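Inconsistent placement groups are normally reported by scrubbing; a common way to find which PG needs repair, and to inspect the affected objects before repairing, is (illustrative usage):
ceph health detail               # lists PGs flagged as inconsistent
rados list-inconsistent-obj 1.6  # shows the inconsistent objects in that PG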
Conclusion
In this article, we explored different use cases of the ceph command. We covered how to check the cluster health status, monitor cluster usage statistics, retrieve placement group statistics, create, delete, and rename storage pools, and repair an inconsistent placement group. These examples demonstrate the versatility and functionality of the ceph command in managing and maintaining a Ceph storage cluster.