Ceph
  • 1,299 videos
  • 527,470 views
Ceph User + Dev Monthly Meeting 2024-06-20
Ceph User + Developer Monthly Meeting - a virtual platform that has long existed in our community to encourage discussion and collaboration between users and developers. The overarching goal of these meetings is to elicit feedback from the users, companies, and organizations who use Ceph in their production environments.
Views: 141

Videos

MicroCeph from Development to Solutions | Ceph Days NYC 2024
261 views • 19 hours ago
In this talk, we will discuss MicroCeph and explore the various use cases for providing quick and simple Ceph storage - from the developer workstation, to CI systems, to edge computing and the data center.
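As a rough illustration (not taken from the talk), bootstrapping a single-node MicroCeph instance typically looks something like the sketch below; the device path and sudo usage are assumptions for illustration only.

    # Sketch: bootstrap a single-node MicroCeph cluster (commands assumed from MicroCeph docs).
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run(["sudo", "snap", "install", "microceph"])                    # install the MicroCeph snap
    run(["sudo", "microceph", "cluster", "bootstrap"])               # initialize this node as the first cluster member
    run(["sudo", "microceph", "disk", "add", "/dev/sdb", "--wipe"])  # hypothetical data device for an OSD
    run(["sudo", "microceph.ceph", "status"])                        # confirm the cluster reports healthy
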
Attempting to Improve Discard Performance | Ceph Days NYC 2024
77 views • 19 hours ago
While digging into RBD performance issues, it was observed that some fragmented OSDs were struggling to keep up with their normal performance and with their internal discard mechanism. Enabling discards in Ceph helped, but they still fell behind, as discards were single-threaded. In this lightning talk, we'll discuss what DigitalOcean observed and how we approached solutio...
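For context, here is a hedged sketch of what "enabling discards" means in practice (option names are taken from upstream BlueStore documentation and may vary by release; this is not DigitalOcean's exact procedure):

    # Sketch: enable BlueStore discards for all OSDs via the central config store.
    import subprocess

    for opt in ("bdev_enable_discard", "bdev_async_discard"):
        subprocess.run(["ceph", "config", "set", "osd", opt, "true"], check=True)

    # Spot-check what a single OSD actually picked up (osd.0 used as an example).
    subprocess.run(["ceph", "config", "get", "osd.0", "bdev_enable_discard"], check=True)
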
How we Operate Ceph at Scale | Ceph Days NYC 2024
141 views • 19 hours ago
As clusters grow in both size and quantity, operator effort should not grow at the same pace. In this talk, Matt Vandermeulen will discuss strategies and challenges for operating clusters of varying sizes in a rapidly growing environment for both RBD and object storage workloads based on DigitalOcean's experiences.
Data Security and Storage Hardening in Rook and Ceph | Ceph Days NYC 2024
79 views • 19 hours ago
We explore the security model exposed by Rook with Ceph, the leading software-defined storage platform of the Open Source world. Digging increasingly deeper in the stack, we examine options for hardening Ceph storage that are appropriate for a variety of threat profiles.
Making RBD Snapshot-based Mirroring Robust for Disaster Recovery | Ceph Days NYC 2024
109 views • 19 hours ago
The feature to mirror RADOS block device (RBD) images across clusters by asynchronous replication of RBD snapshots was introduced a few years ago. It has recently been integrated into the disaster recovery (DR) solution for container workloads backed by RBD in OpenShift Kubernetes environments. The integration and testing of the DR solution uncovered bugs and helped identify missing pieces in snap...
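As a minimal sketch of the feature under discussion (pool and image names are hypothetical), snapshot-based mirroring is enabled per image on the primary cluster and driven by a snapshot schedule:

    # Sketch: enable snapshot-based RBD mirroring for one image (names are made up).
    import subprocess

    POOL, IMAGE = "rbd", "vm-disk-1"

    def rbd(*args):
        subprocess.run(["rbd", *args], check=True)

    rbd("mirror", "pool", "enable", POOL, "image")                   # per-image mirroring mode on the pool
    rbd("mirror", "image", "enable", f"{POOL}/{IMAGE}", "snapshot")  # snapshot-based rather than journal-based
    rbd("mirror", "snapshot", "schedule", "add",
        "--pool", POOL, "--image", IMAGE, "1h")                      # take a mirror snapshot every hour
    rbd("mirror", "image", "status", f"{POOL}/{IMAGE}")              # check replication state

A peer cluster still has to be bootstrapped with the rbd mirror pool peer commands and run an rbd-mirror daemon; the sketch only covers the primary-side switches.
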
Ceph User + Dev Monthly Meeting 2024-05-23
176 views • 21 days ago
Ceph User + Developer Monthly Meeting - a virtual platform that has long existed in our community to encourage discussion and collaboration between users and developers. The overarching goal of these meetings is to elicit feedback from the users, companies, and organizations who use Ceph in their production environments.
Ceph Data Placement with Upmap / Introducing Chorus | Ceph Days NYC 2024
334 views • 1 month ago
Ceph data placement is a simple yet nuanced subject. Using its secret sauce, CRUSH, Ceph empowers organizations to easily implement complex data placement policies in a way that maximizes the reliability of the storage infrastructure. However, CRUSH is not perfect: imbalances lead to costly space inefficiencies. To fix this, Ceph introduced the upmap balancer several releases ago. But the inter...
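For readers who want to try the balancer mentioned here, a minimal sketch (assuming all clients are Luminous or newer; not an excerpt from the talk):

    # Sketch: switch the Ceph balancer to upmap mode and let it even out PG placement.
    import subprocess

    def ceph(*args):
        subprocess.run(["ceph", *args], check=True)

    ceph("osd", "set-require-min-compat-client", "luminous")  # pg-upmap requires Luminous+ clients
    ceph("balancer", "mode", "upmap")                         # balance using pg-upmap-items entries
    ceph("balancer", "on")                                    # the mgr moves PGs gradually in the background
    ceph("balancer", "status")                                # inspect mode, plans, and progress
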
NVMe-Over-Fabrics Support for Ceph | Ceph Days NYC 2024
402 views • 1 month ago
With the introduction of NVMe drives, data center and cloud workloads alike have benefited from their increased performance, durability, and parallelism. Combine these qualities with the scale and agility of Ceph block storage, leveraging existing networks as transport, and you've got a recipe for cost-effective success at scale, with a high level of reliability and redundancy. Come explore what...
Practical Business Ceph Examples | Ceph Days NYC 2024
215 views • 1 month ago
ISS has run Ceph operations in business settings for 10 years with great success. Alex will present several use cases for midrange datacenter use: Proxmox VMs, SAN replacement, Backup, Kubernetes CSI, S3, RHEL/Pacemaker, NFS.
Community Initiatives and Improving Ceph through User Feedback | Ceph Days NYC 2024
96 views • 1 month ago
We launched the Ceph User Council in March 2024. We want to leverage this workgroup to focus on Ceph users' experience and provide consistent, structured feedback to the Ceph technical team. In this presentation, we give an update on the council and raise awareness of the initiative. Additionally, the Ceph Foundation Board would like to capture feedback from the Ceph us...
Designing a Multitenancy File System for Cloud Environment | Ceph Days NYC 2024
408 views • 1 month ago
In the dynamic landscape of cloud technology, creating multitenant file systems to meet industry-specific needs presents a unique set of challenges. This talk details a project by 45Drives, aimed at developing a multitenant filesystem for a Fortune 100 client within the Media & Entertainment industry. The project faced considerable design challenges, primarily due to the traditional file system...
Ceph: A Journey to 1 TiB/s | Ceph Days NYC 2024
1.1K views • 1 month ago
Mark Nelson gives a talk at Ceph Days NYC 2024 about Ceph's journey to 1 TiB/s. The talk covers the trials and joys of transforming an HDD-based Ceph cluster into a high-performance NVMe deployment and the level of performance we were able to achieve during testing. We'll cover the hardware choices we made, how we ran tests, and how we tackled bottlenecks and bugs.
Designing and Tuning for All-Flash Ceph RBD Storage | Ceph Days NYC 2024
630 views • 1 month ago
Tyler Stachecki speaks at Ceph Days NYC to OpenStack cloud providers and operators. Ceph RBD integrates well with Cinder services and provides very reliable, highly performant block storage. Unfortunately, while Ceph RBD clusters are easy to get up and going, extracting the maximum possible performance from all-flash Ceph RBD clusters is a bit of a black art. A cursory online search might su...
Diving Deep with Squid | Ceph Days NYC 2024
424 views • 1 month ago
Hosted by Josh Durgin and Neha Ojha at Ceph Days NYC 2024, this presentation was a look at the newest Ceph release, current development priorities, and the latest activity in the Ceph community. Website: ceph.io/en/community/events/2024/ceph-days-nyc/
Ceph Developer Monthly 2024-04-17
153 views • 1 month ago
Ceph Developer Monthly 2024-04-17
Ceph RGW Refactoring Meeting 2024-03-06
110 views • 3 months ago
Ceph RGW Refactoring Meeting 2024-03-06
Ceph Code Walkthroughs: Crimson
332 views • 3 months ago
Ceph Code Walkthroughs: Crimson
Ceph RGW Refactoring Meeting 2024-02-21
111 views • 4 months ago
Ceph RGW Refactoring Meeting 2024-02-21
Ceph RGW Refactoring Meeting 2024-02-14
87 views • 4 months ago
Ceph RGW Refactoring Meeting 2024-02-14
Ceph Developer Monthly 2024-02-07
175 views • 4 months ago
Ceph Developer Monthly 2024-02-07
Ceph RGW Refactoring Meeting 2024-01-31
88 views • 4 months ago
Ceph RGW Refactoring Meeting 2024-01-31
Ceph RGW Refactoring Meeting 2024-01-24
86 views • 5 months ago
Ceph RGW Refactoring Meeting 2024-01-24
Ceph User + Dev Monthly 2023-11-16
111 views • 5 months ago
Ceph User Dev Monthly 2023-11-16
Ceph User + Dev Monthly 2024-01-18
86 views • 5 months ago
Ceph User Dev Monthly 2024-01-18
Ceph Developer Monthly 2023-12-06
69 views • 5 months ago
Ceph Developer Monthly 2023-12-06
Ceph RGW Refactoring Meeting 2024-01-03
119 views • 5 months ago
Ceph RGW Refactoring Meeting 2024-01-03
Ceph RGW Refactoring Meeting 2023-12-13
115 views • 6 months ago
Ceph RGW Refactoring Meeting 2023-12-13
Ceph Developer Monthly 2023-12-06
98 views • 6 months ago
Ceph Developer Monthly 2023-12-06
Ceph RGW Refactoring Meeting 2023-11-29
73 views • 6 months ago
Ceph RGW Refactoring Meeting 2023-11-29

Comments

  • @spacewolfjr
    @spacewolfjr 3 days ago

    Magnificent!

  • @kelownatechkid
    @kelownatechkid 26 days ago

    Make all the container logs available in the dashboard. Then users can go into the services and see what has happened

  • @yourjjrjjrjj
    @yourjjrjjrjj 29 days ago

    Is a 3-node Ceph cluster (with, say, 5 OSDs per node) OK for production? Or is it significantly better to have 5 nodes with 3 OSDs per node? I only care about the performance.

  • @seccentral
    @seccentral 1 month ago

    This NUMA optimization (at the CHIPLET level) reminds me of coders going above and beyond trying to do the compiler's job. Which is very cool for live streams and tech blogs and tech videos on youtube but it's a pain for actual programmers. So, just like the compiler is supposed to handle the lower level optimizations, so should the software handle the modern architectures properly and spare the sysadmin / storage admin from doing this manual bolt tightening. He's the driver, not the mechanic. I mean it's super cool, don't get me wrong, but it's the coding team's concern (whoever that is)

    • @amosgiture
      @amosgiture 9 days ago

      Complex performance tuning can scare a systems admin into opting for a closed-source solution, but even those end up having crazy bugs that are only known by insiders.

    • @seccentral
      @seccentral 9 days ago

      @@amosgiture not really scared scared, but more like weighing the pros/cons and how much he's being paid, because that sort of decision comes down to TCO and negotiated discounts / support packages at the management level. If the company relies 100% on its own IT department and does not buy support, and it also has very competent sysadmins, the only reason those closed-source solutions are interesting most of the time is just because they say so in a very fancy way and catch the executive's (non-technical) eye, and there's no code to prove the contrary. So unless we're talking dedicated solutions that include hardware and SLAs, open source is just as solid as commercial, if not more so. Back on point, this low-level tuning is a pain until you get it, and afterwards you just automate it away. But still, it is at the fundamental level *not* the sysadmin's job. Just like optimizing branch prediction is not a TS frontend dev's job, tuning for inter-chiplet NUMA is not the sysadmin's.

  • @seccentral
    @seccentral 1 month ago

    Great tip on ZFS aligned at 4M! At the beginning you referenced a talk about NUMA chiplets? I'd love to watch that too.

  • @kelownatechkid
    @kelownatechkid 1 month ago

    Are the slides posted anywhere? The camera feed cuts them off in the video unfortunately.

  • @kelownatechkid
    @kelownatechkid 1 month ago

    It's great to see more initiatives going towards engaging with users and collecting feedback. It would be great to have support for air-gapped clusters in the public boards, for example; I think a lot of users keep their clusters secured off the internet but would love to share data.

  • @callowaysutton
    @callowaysutton 1 month ago

    Now this is awesome

  • @kelownatechkid
    @kelownatechkid 1 month ago

    Great presentation! It's so exciting to see that CephFS is getting more dashboard functionality, that will be a real help with dealing with permissions

  • @DS-ou7xm
    @DS-ou7xm 1 month ago

    10 years no corruption..... simply poetry to my ears ....👍

  • @p7272
    @p7272 3 months ago

    I WANT IN!!!

  • @liucxchangxi
    @liucxchangxi 3 months ago

    where is the ppt?

  • @kuliserper
    @kuliserper 4 months ago

    Thank you for the great presentation!

  • @techthis
    @techthis 4 months ago

    I found it very easy to deploy Ceph with rook in a k0s cluster!

  • @RobertGallop
    @RobertGallop 5 months ago

    Can we get the link to the docs here as a pinned comment or in video description?

  • @george0hz3
    @george0hz3 5 months ago

    Watching this repeatedly to understand it.

  • @happy9955
    @happy9955 5 months ago

    Looking forward to using Ceph.

  • @zap8014
    @zap8014 6 months ago

    Thanks Neha.

  • @pot8778
    @pot8778 6 months ago

    How can we put multisite replication into maintenance mode for OS patching?

  • @Bhaveshk
    @Bhaveshk 7 months ago

    This video is so good that my professor straight up teaches this and these slides when explaining ceph! Thanks for helping me with finals :p

  • @varunjain3870
    @varunjain3870 9 months ago

    I think Portworx is much more advanced in terms of features, functionality, and security.

  • @asthabichha3950
    @asthabichha3950 9 months ago

    Hi, I have a Kubernetes cluster, and based on the namespaces created on it I want to isolate/segregate data. I am using Ceph as storage. So would an external Ceph cluster be the answer for this?

  • @sohom004
    @sohom004 10 months ago

    Good one :)

  • @marcofedo
    @marcofedo 10 months ago

    Can you supply information on the servers you used for the high density configs in 2022?

  • @George-yh4vr
    @George-yh4vr 10 months ago

    'promo sm'

  • @falazarte
    @falazarte 10 months ago

    Amazing presentation. Although from 2019, it is still very relevant.

  • @TheMaxwellify
    @TheMaxwellify 11 months ago

    At 17:40, is the conclusion that Ceph should be used with write cache ON or OFF? Thanks so much for all the insights!

  • @user-if4gm4dg3h
    @user-if4gm4dg3h 11 months ago

    Is there a download link for the slides (PPT)?

  • @ulysses4536
    @ulysses4536 11 months ago

    I'm from the future 👋🏼 Building this locally on a 16G Mac machine with ccache takes between 3 and 20 minutes depending on the hit rate, though the numbers aren't final. I also like to check out the upstream repository first and then add my fork. Then you have the community commits under the `origin/` namespace, which kind of makes sense. Also, no matter what order you created your repos in or where you cloned from, you can always rename a remote with `git remote rename old_name new_name`. That's another step, but if you're already there, then that's the solution.

  • @Peoplevoice09
    @Peoplevoice09 11 months ago

    Thank you for the knowledge sharing!!!

  • @justinoleary911
    @justinoleary911 11 months ago

    Why would Rook offer no encryption support at the Kubernetes PVC and PV layers (in transit and at rest) if it's designed for Kubernetes? This is hard to find; it only works in Portworx.

  • @robertsretrogaming
    @robertsretrogaming 11 months ago

    Canonical. I'm out.

  • @jimallen8238
    @jimallen8238 11 months ago

    Ceph is super interesting, but I lost any interest in this use case the moment he said Snaps and Canonical. Not interested in supporting the continued erosion of open source freedom and code transparency. Just say NO TO SNAPS.

  • @nasirmahmood7799
    @nasirmahmood7799 11 months ago

    screen stuck at 13:20 ?

  • @nasirmahmood7799
    @nasirmahmood7799 11 months ago

    Can someone please guide me to, or share the link to, this training set?

  • @marcellogambetti9458
    @marcellogambetti9458 1 year ago

    audio is terrible sorry

  • @isj-3227
    @isj-3227 1 year ago

    Great info, but what's with the teeth-sucking or lip-smacking noises? Super distracting and gross.

  • @frzen
    @frzen 1 year ago

    Great fun talk

  • @seanrebn1092
    @seanrebn1092 1 year ago

    Nice~

  • @enekolacunza112
    @enekolacunza112 1 year ago

    Very nice to see this kind of presentation at a tech conference. We usually extend the life of servers up to 9 years in mixed-age clusters, because they perform well enough for the customers and it has quite a financial impact. Your presentation made me realize how we're helping the environment too! Thanks!

  • @kelownatechkid
    @kelownatechkid 1 year ago

    Regarding 95% usage of a single osd - I have temporarily worked around high single osd usage by using 'ceph osd reweight-by-utilization' and 'osd reweight'.

  • @kelownatechkid
    @kelownatechkid 1 year ago

    Regarding forums: I think the subreddit (r/ceph) is probably the largest one, albeit unofficial.

  • @cxwshawn
    @cxwshawn 1 year ago

    Perfect, where can I get the slides?

  • @kelownatechkid
    @kelownatechkid 1 year ago

    Thank you - great work and presentation

  • @nickway_
    @nickway_ 1 year ago

    This was very helpful. I really liked the OCD tennis coach analogy.

  • @kelownatechkid
    @kelownatechkid 1 year ago

    Great stuff. This is a great introduction into the future plans.

  • @TheExard3k
    @TheExard3k 1 year ago

    Thanks, Wido! I'm planning a small cluster consisting of consumer CPUs that have clock speeds as high as 5.8 GHz. I'll keep your observations in mind, as QD=1 is very relevant to our needs.

  • @kelownatechkid
    @kelownatechkid 1 year ago

    Great to see more work being done on this!

  • @kelownatechkid
    @kelownatechkid 1 year ago

    Very cool, and potentially useful as network traffic grows rapidly

  • @kelownatechkid
    @kelownatechkid 1 year ago

    Really appreciate these talks. Invaluable to help plan for the future.