chore(ingest): update doc & log detail (#10139)

This commit is contained in:
Huanjie Guo 2024-03-28 04:39:06 +08:00 committed by GitHub
parent 2e8936dd20
commit 654d991753
GPG Key ID: B5690EEEBB952194
3 changed files with 9 additions and 7 deletions


@@ -107,9 +107,9 @@ ingestion recipe is producing the desired metadata events before ingesting them
 ```shell
 # Dry run
-datahub ingest -c ./examples/recipes/example_to_datahub_rest.dhub.yml --dry-run
+datahub ingest -c ./examples/recipes/example_to_datahub_rest.dhub.yaml --dry-run
 # Short-form
-datahub ingest -c ./examples/recipes/example_to_datahub_rest.dhub.yml -n
+datahub ingest -c ./examples/recipes/example_to_datahub_rest.dhub.yaml -n
 ```
 #### ingest --preview
@@ -119,16 +119,16 @@ This option helps with quick end-to-end smoke testing of the ingestion recipe.
 ```shell
 # Preview
-datahub ingest -c ./examples/recipes/example_to_datahub_rest.dhub.yml --preview
+datahub ingest -c ./examples/recipes/example_to_datahub_rest.dhub.yaml --preview
 # Preview with dry-run
-datahub ingest -c ./examples/recipes/example_to_datahub_rest.dhub.yml -n --preview
+datahub ingest -c ./examples/recipes/example_to_datahub_rest.dhub.yaml -n --preview
 ```
 By default `--preview` creates 10 workunits. But if you wish to try producing more workunits you can use another option `--preview-workunits`
 ```shell
 # Preview 20 workunits without sending anything to sink
-datahub ingest -c ./examples/recipes/example_to_datahub_rest.dhub.yml -n --preview --preview-workunits=20
+datahub ingest -c ./examples/recipes/example_to_datahub_rest.dhub.yaml -n --preview --preview-workunits=20
 ```
 #### ingest deploy


@@ -149,7 +149,9 @@ def load_config_file(
         ) from e
     else:
         if not config_file_path.is_file():
-            raise ConfigurationError(f"Cannot open config file {config_file_path}")
+            raise ConfigurationError(
+                f"Cannot open config file {config_file_path.resolve()}"
+            )
         raw_config_file = config_file_path.read_text()
         config_fp = io.StringIO(raw_config_file)
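The change above swaps the raw path for `config_file_path.resolve()` in the error message, so a relative path like `missing.yaml` is reported as a full absolute path, which makes the "which file did it actually look at?" question trivial to answer from the log. A minimal sketch of the same idea, using a hypothetical `check_config_path` helper and `FileNotFoundError` in place of DataHub's `ConfigurationError`:

```python
from pathlib import Path

# Illustrative sketch, not the DataHub implementation: resolve() expands a
# relative path against the current working directory, so the error names
# the exact absolute file that was checked.
def check_config_path(config_file_path: Path) -> None:
    if not config_file_path.is_file():
        # e.g. "missing.yaml" -> "/current/working/dir/missing.yaml"
        raise FileNotFoundError(
            f"Cannot open config file {config_file_path.resolve()}"
        )
```

The error text now carries the resolved location, which is especially useful when the CLI is invoked from a different working directory than the user expects.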


@@ -32,7 +32,7 @@ class PartitionExecutor(Closeable):
     It works similarly to a ThreadPoolExecutor, with the following changes:
     - At most one request per partition key will be executing at a time.
-    - If the number of pending requests exceeds the threshold, the submit call
+    - If the number of pending requests exceeds the threshold, the submit() call
       will block until the number of pending requests drops below the threshold.
 
     Due to the interaction between max_workers and max_pending, it is possible
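The docstring line being corrected describes backpressure on `submit()`: once `max_pending` requests are in flight, further submissions block until a slot frees up. A minimal sketch of that one behavior (this is an illustration only, not DataHub's `PartitionExecutor`, and it omits the per-partition-key serialization the real class provides):

```python
import threading
from concurrent.futures import Future, ThreadPoolExecutor

# Sketch of submit()-side backpressure: a bounded semaphore caps the number
# of queued-or-running tasks, and each completed task releases one slot.
class BoundedExecutor:
    def __init__(self, max_workers: int, max_pending: int) -> None:
        self._pool = ThreadPoolExecutor(max_workers=max_workers)
        self._slots = threading.BoundedSemaphore(max_pending)

    def submit(self, fn, *args, **kwargs) -> Future:
        # Blocks here while max_pending tasks are already in flight.
        self._slots.acquire()
        future = self._pool.submit(fn, *args, **kwargs)
        future.add_done_callback(lambda _f: self._slots.release())
        return future

    def shutdown(self) -> None:
        self._pool.shutdown(wait=True)
```

Blocking in `submit()` rather than growing an unbounded queue keeps memory bounded when producers outpace workers, which is the trade-off the `max_workers`/`max_pending` interaction in the docstring refers to.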