58.0.1: Remove all keys that can be moved back to respective GMS
58.0.0: Revert "Reverting the commit range: f0c894b490d3df047837cf2fb7b9911c86188cae..4b5f31ed8844f818d7db0880d30c8dc8c7ac0087."
57.0.16: Reverting the commit range: f0c894b490d3df047837cf2fb7b9911c86188cae..4b5f31ed8844f818d7db0880d30c8dc8c7ac0087.
57.0.15: Disable filtering removed entities in browse until META-10900 is solved
57.0.14: (resubmit) add graph index builder for ai-metadata entities and relationships
57.0.13: Reverting the commit range: 830e63b4b40cf701db216952c34d731a7a82ea1d..4255871452062c2fd14651cb4fffb7d337bad300.
57.0.12: add graph index builder for ai-metadata entities and relationships
57.0.11: Fix bug which sets removed field to always true while building DatasetDocument
57.0.10: Change p12 file name to new ina group name
57.0.9: Add removal field in field compliance to flag the proposal as removal or not.
57.0.8: Adding action Builder for DatasetInstance entity
57.0.7: Adding GMA entities and relations for GridWorkflow and GridWorkflowExecution
57.0.6: Adding dataType and dataClassification to the search document
57.0.5: Rename graph entity MlTrainedModel to MlTrainedModelEntity
57.0.4: Code to form the FollowedBy Graph based on the Follow Aspect
57.0.3: add graph entity and relationship models for ai-metadata
57.0.2: Refactor incorrect use of mock in variable names
57.0.1: Add support for <, <=, >, >= conditions for the filter API
57.0.0: Update Conditions model for <, <=, >, >= conditions
56.0.5: update version of pegasus metadata plugin
56.0.4: update container dependency
56.0.3: Move mlFeatures from SnapshotRequestBuilders to ActionRequestbuilder
56.0.2: Adding reserved versions aspect
56.0.1: Create search filter for compliance pending review proposal.
56.0.0: Add Likes aspect resource in metadata restli utils
55.0.6: Fix a bug with getAll API
55.0.5: Move applicable metadata-store SnapshotRequestBuilders to ActionRequestbuilder
55.0.4: EspressoDAO: Updated to expect a separator between entityType and aspectName for config mapping keys
55.0.3: Added EspressoRecordSerializer and EspressoDAOUtils
55.0.2: Rewrote EspressoLocalDAOTest with a mocked EspressoAccessor
55.0.1: Migrate metric-gms SnapshotRequestBuilders to ActionRequestBuilder
55.0.0: [Wormhole] Deprecate Holdem-centric locations in favor of the more general CORP locations, which contain Holdem.
54.0.1: Migrate job-gms SnapshotRequestBuilders to ActionRequestBuilder
wherehows-samza 1.0.56 -> 1.0.56:
1.0.56: Gradle5 migration
MP_VERSION=metadata-models:58.0.1
MP_VERSION=wherehows-samza:1.0.56
This commit is automatically generated by li-opensource tool.
DataHub: A Generalized Metadata Search & Discovery Tool
📣 Next DataHub town hall meeting on April 3rd, 9am-10am PDT:
- Signup sheet & questions
- Details and recordings of past meetings can be found here
✨ Mar 2020 Update:
- DataHub v0.3.1 has just been released. See the release notes for more details.
- We're on Slack now! Ask questions and keep up with the latest announcements.
Introduction
DataHub is LinkedIn's generalized metadata search & discovery tool. To learn more about DataHub, check out our LinkedIn blog post and Strata presentation. You should also visit DataHub Architecture to get a better understanding of how DataHub is implemented, and the DataHub Onboarding Guide to learn how to extend DataHub for your own use case.
This repository contains the complete source code for both DataHub's frontend & backend. You can also read about how we sync changes between our internal fork and GitHub.
Quickstart
- Install Docker and docker-compose (docker-compose must be installed separately on Linux). Make sure to allocate enough hardware resources for the Docker engine. Tested & confirmed config: 2 CPUs, 8GB RAM, 2GB Swap area.
- Open Docker either from the command line or the desktop app and ensure it is up and running.
- Clone this repo and `cd` into the root directory of the cloned repository.
- Run the following command to download and run all Docker containers locally:

  ```
  cd docker/quickstart && source ./quickstart.sh
  ```

  This step takes a while to run the first time, and it may be difficult to tell if DataHub is fully up and running from the combined log. Please use this guide to verify that each container is running correctly.
- At this point, you should be able to start DataHub by opening http://localhost:9001 in your browser. You can sign in using `datahub` as both username and password. However, you'll notice that no data has been ingested yet.
- To ingest the provided sample data into DataHub, switch to a new terminal window, `cd` into the cloned `datahub` repo, and run the following command:

  ```
  docker build -t ingestion -f docker/ingestion/Dockerfile . && cd docker/ingestion && docker-compose up
  ```

  After running this, you should be able to see and search sample datasets in DataHub.
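For reference, the quickstart steps above can be sketched as one shell session. This is a non-authoritative consolidation of the commands already listed; it assumes Docker is running, and the repo URL and the `/path/to/datahub` placeholder are illustrative:

```shell
# Terminal 1: clone the repo and bring up all DataHub containers locally.
git clone https://github.com/linkedin/datahub.git
cd datahub/docker/quickstart
source ./quickstart.sh        # first run downloads images and can take a while

# Terminal 2: build the ingestion image and load the sample data.
cd /path/to/datahub           # root of the cloned repo
docker build -t ingestion -f docker/ingestion/Dockerfile .
cd docker/ingestion
docker-compose up             # UI at http://localhost:9001, login datahub/datahub
```

The two terminals matter: the quickstart script streams the combined container logs in the foreground, so ingestion has to run in a separate session.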
Please refer to the debugging guide if you encounter any issues during the quickstart.
Documentation
- DataHub Developer's Guide
- DataHub Architecture
- DataHub Onboarding Guide
- Docker Images
- Frontend
- Web App
- Generalized Metadata Service
- Metadata Ingestion
- Metadata Processing Jobs
Releases
See the Releases page for more details. We follow the SemVer Specification when versioning releases and adopt the Keep a Changelog convention for the changelog format.
FAQs
Frequently Asked Questions about DataHub can be found here.
Features & Roadmap
Check out DataHub's Features & Roadmap.
Contributing
We welcome contributions from the community. Please refer to our Contributing Guidelines for more details. We also have a contrib directory for incubating experimental features.
Community
Join our Slack workspace for important discussions and announcements. You can also find out more about our past and upcoming town hall meetings.
Related Articles & Presentations
- DataHub: A Generalized Metadata Search & Discovery Tool
- Open sourcing DataHub: LinkedIn’s metadata search and discovery platform
- The evolution of metadata: LinkedIn’s story @ Strata Data Conference 2019
- Journey of metadata at LinkedIn @ Crunch Data Conference 2019
- Data Catalogue — Knowing your data
- How LinkedIn, Uber, Lyft, Airbnb and Netflix are Solving Data Management and Discovery for Machine Learning Solutions
- The Latest Progress of LinkedIn's Metadata Journey: DataHub
- Data Governance: An Overview of DataHub for Metadata
- LinkedIn Releases the Data Platform DataHub as Open Source
- LinkedIn Launches an Open-Source DataHub
