Ceph is moving towards the use of "ceph-deploy", but on the current version of Ubuntu this gave me issues with host resolution, even though the hosts file and the hostname are correct.
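For reference, the kind of check I mean is nothing Ceph specific, just confirming that the machine's short name resolves to the machine itself; the commands below are a generic sketch, not something from the Ceph docs:
hostname -s
getent hosts $(hostname -s)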
There's another page that uses the older method of creating a cluster, but this also creates problems when the OSD ( the daemon that actually stores your files ) is started.
It did get me further though. The link is here:
http://ceph.com/docs/dumpling/start/quick-start/
So I just followed that guide. When you make an error, you can't just remove the osd directories, because the keyring is copied along and you then get authentication issues. So on an error, remove the mds and mon directories as well and rerun the mkcephfs command.
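As a rough sketch of that cleanup, assuming the default /var/lib/ceph layout and the config and keyring under /etc/ceph ( adjust the paths to your own setup ):
sudo service ceph stop
sudo rm -rf /var/lib/ceph/osd/* /var/lib/ceph/mon/* /var/lib/ceph/mds/*
sudo mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.keyring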
I don't have a special partition available to use for Ceph, so I just have files in /var/lib/ceph for now. When the service is restarted, however, it complains about this:
Error ENOENT: osd.0 does not exist. create it before updating the crush map
One solution for this is to start the OSDs yourself:
ceph-osd -i 0 -c /etc/ceph/ceph.conf
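If you have more OSDs defined in ceph.conf, each one needs the same treatment on its own host. Afterwards you can check that they all registered; osd.1 below is just an example from my assumed two-OSD setup:
ceph-osd -i 1 -c /etc/ceph/ceph.conf
ceph osd tree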
That'd get you halfway there. You only need to do this once; afterwards the automated start script from Ceph will work. The next thing is that ceph health shows issues, which is because the standard replication level is 3. This means you need a minimum of 3 servers to get items replicated, and we just configured 2.
On my machine I don't want replication, so I ran:
# ceph osd pool set data size 1
# ceph osd pool set metadata size 1
# ceph osd pool set rbd size 1
You can list all configured pools:
# ceph osd lspools
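To double check that the change took effect and that the cluster settles down afterwards ( using the data pool as the example ):
# ceph osd pool get data size
# ceph health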
The other step is to configure a RADOS gateway so that it's possible to access files Amazon S3 style. There are some sites that claim to know how to do this, but I found this one here:
http://ceph.com/docs/dumpling/start/quick-rgw/
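Once the gateway itself is up, you still need an S3 user. As a sketch, with the uid and display name as placeholders, radosgw-admin creates one and prints the access and secret key to plug into whatever S3 client you use:
sudo radosgw-admin user create --uid=testuser --display-name="Test User"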
There should be a better way to do this for simple setups. For real clusters I think things would be a bit easier, as not everything is running on the same machine; I think that's what causes some things to break here or there.