Index
APIReader
Bases: BaseReader
Utility class for reading an API into a DataFrame.
This class uses an APIClient to fetch data from an API and load it into a Spark DataFrame.
Attributes:
| Name | Type | Description | 
|---|---|---|
| api_client | APIClient | The client for making API requests. |
Source code in src/cloe_nessy/integration/reader/api_reader.py
__init__(base_url, auth, default_headers=None)
Initializes the APIReader object.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| base_url | str | The base URL for the API. | required |
| auth | AuthBase | None | The authentication method for the API. | required | 
| default_headers | dict[str, str] | None | Default headers to include in requests. | None | 
Source code in src/cloe_nessy/integration/reader/api_reader.py
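A minimal construction sketch, assuming the class is importable from the module path shown above and that any requests-style AuthBase instance is accepted; the URL and credentials are placeholders:

```python
from requests.auth import HTTPBasicAuth

# Import path assumed from the source location shown above.
from cloe_nessy.integration.reader.api_reader import APIReader

# base_url and credentials are placeholders for illustration.
reader = APIReader(
    base_url="https://api.example.com/v1",
    auth=HTTPBasicAuth("user", "secret"),
    default_headers={"Accept": "application/json"},
)
```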
              
_add_metadata_column(df, response)
Adds a metadata column to a DataFrame.
This method appends a column named __metadata to the given DataFrame, containing a map
of metadata related to an API response. The metadata includes the current timestamp,
the base URL of the API, the URL of the request, the HTTP status code, the reason phrase,
and the elapsed time of the request in seconds.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| df | DataFrame | The DataFrame to which the metadata column will be added. | required | 
| response | APIResponse | The API response object containing the metadata to be added. | required | 
Returns:
| Name | Type | Description | 
|---|---|---|
| DataFrame | The original DataFrame with an added __metadata column. |
Source code in src/cloe_nessy/integration/reader/api_reader.py
              
read(*, endpoint='', method='GET', key=None, timeout=30, params=None, headers=None, data=None, json_body=None, max_retries=0, options=None, add_metadata_column=False, **kwargs)
Reads data from an API endpoint and returns it as a DataFrame.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| endpoint | str | The endpoint to send the request to. | '' | 
| method | str | The HTTP method to use for the request. | 'GET' | 
| key | str | None | The key to extract from the JSON response. | None | 
| timeout | int | The timeout for the request in seconds. | 30 | 
| params | dict[str, str] | None | The query parameters for the request. | None | 
| headers | dict[str, str] | None | The headers to include in the request. | None | 
| data | dict[str, str] | None | The form data to include in the request. | None | 
| json_body | dict[str, str] | None | The JSON data to include in the request. | None | 
| max_retries | int | The maximum number of retries for the request. | 0 | 
| options | dict[str, str] | None | Additional options for the createDataFrame function. | None | 
| add_metadata_column | bool | If set, adds a __metadata column containing metadata about the API response. | False | 
| **kwargs | Any | Additional keyword arguments to maintain compatibility with the base class method. | {} | 
Returns:
| Name | Type | Description | 
|---|---|---|
| DataFrame | DataFrame | The Spark DataFrame containing the read data in the json_object column. | 
Raises:
| Type | Description | 
|---|---|
| RuntimeError | If there is an error with the API request or reading the data. | 
Source code in src/cloe_nessy/integration/reader/api_reader.py
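A hedged usage sketch for read(), continuing the reader built in the __init__ example above; the endpoint, key, and query parameters are placeholders:

```python
# Continues the `reader` from the construction example above.
# Endpoint, key, and query parameters are placeholders.
df = reader.read(
    endpoint="orders",
    method="GET",
    key="items",                 # extract the "items" key from the JSON response
    params={"updated_since": "2024-01-01"},
    timeout=60,
    max_retries=3,
    add_metadata_column=True,
)

# The payload lands in the json_object column; response details in __metadata.
df.select("json_object", "__metadata").show(truncate=False)
```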
              
CatalogReader
Bases: BaseReader
A reader for Unity Catalog objects.
This class reads data from a Unity Catalog table and loads it into a Spark DataFrame.
Source code in src/cloe_nessy/integration/reader/catalog_reader.py
                
__init__()
read(table_identifier='', *, options=None, delta_load_options=None, **kwargs)
Reads a table from the Unity Catalog.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| table_identifier | str | The table identifier in the Unity Catalog in the format 'catalog.schema.table'. | '' | 
| options | dict[str, str] | None | PySpark options for the read table operation. | None | 
| delta_load_options | DeltaLoadOptions | None | Options for delta loading, if applicable. When provided, uses delta loader instead of regular table read to perform incremental loading. | None | 
| **kwargs | Any | Additional keyword arguments to maintain compatibility with the base class method. | {} | 
Returns:
| Type | Description | 
|---|---|
| DataFrame | The Spark DataFrame containing the read data. | 
Raises:
| Type | Description | 
|---|---|
| ValueError | If the table_identifier is not provided, is not a string, or is not in the correct format. | 
| ReadOperationFailedError | For delta load or table read failures. | 
Source code in src/cloe_nessy/integration/reader/catalog_reader.py
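A minimal usage sketch, assuming the module path shown above is importable; the table identifier is a placeholder:

```python
# Import path assumed from the source location shown above.
from cloe_nessy.integration.reader.catalog_reader import CatalogReader

reader = CatalogReader()

# The table identifier is a placeholder; it must follow 'catalog.schema.table'.
df = reader.read("my_catalog.my_schema.my_table")

# Passing a DeltaLoadOptions instance via delta_load_options would switch the
# call to incremental (delta) loading; its construction is not shown here.
```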
              
ExcelDataFrameReader
Bases: BaseReader
Utility class for reading an Excel file into a DataFrame.
This class uses the Pandas API on Spark to read Excel files to a DataFrame. More information can be found in the official documentation.
Source code in src/cloe_nessy/integration/reader/excel_reader.py
__init__()
_add_metadata_column(df, location, sheet_name)
Adds a metadata column to a DataFrame.
This method appends a column named __metadata to the given DataFrame, containing a map
of metadata related to the Excel file read operation. The metadata includes the current
timestamp, the location of the Excel file, and the sheet name(s) from which the data was read.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| df | DataFrame | The DataFrame to which the metadata column will be added. | required | 
| location | str | The file path of the Excel file. | required | 
| sheet_name | str | int | list | The sheet name or sheet index used when reading the Excel file. | required | 
Returns:
| Name | Type | Description | 
|---|---|---|
| DataFrame | The original DataFrame with an added __metadata column. |
Source code in src/cloe_nessy/integration/reader/excel_reader.py
              
read(location, *, sheet_name=0, header=0, index_col=None, usecols=None, true_values=None, false_values=None, nrows=None, na_values=None, keep_default_na=True, parse_dates=False, date_parser=None, thousands=None, options=None, load_as_strings=False, add_metadata_column=False, **kwargs)
Reads an Excel file from the specified location and returns a DataFrame.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| location | str | Location of files to read. | required | 
| sheet_name | str | int | list | Strings are used for sheet names. Integers are used in zero-indexed sheet positions. Lists of strings/integers are used to request multiple sheets. Specify None to get all sheets. | 0 | 
| header | int | list[int] | Row to use for column labels. If a list of integers is passed those row positions will be combined. Use None if there is no header. | 0 | 
| index_col | int | list[int] | None | Column to use as the row labels of the DataFrame. Pass None if there is no such column. If a list is passed, those columns will be combined. | None | 
| usecols | int | str | list | Callable | None | Return a subset of the columns. If None, then parse all columns. If str, then indicates comma separated list of Excel column letters and column ranges (e.g. “A:E” or “A,C,E:F”). Ranges are inclusive of both sides. If list of int, then indicates list of column numbers to be parsed. If list of string, then indicates list of column names to be parsed. If Callable, then evaluate each column name against it and parse the column if the Callable returns True. | None | 
| true_values | list | None | Values to consider as True. | None | 
| false_values | list | None | Values to consider as False. | None | 
| nrows | int | None | Number of rows to parse. | None | 
| na_values | list[str] | dict[str, list[str]] | None | Additional strings to recognize as NA/NaN. If dict passed, specific per-column NA values. | None | 
| keep_default_na | bool | If na_values are specified and keep_default_na is False the default NaN values are overridden, otherwise they're appended to. | True | 
| parse_dates | bool | list | dict | The behavior is as follows: - bool. If True -> try parsing the index. - list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column. - list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column. - dict, e.g. {"foo": [1, 3]} -> parse columns 1, 3 as date and call the result "foo". If a column or index contains an unparseable date, the entire column or index will be returned unaltered as an object data type. | False | 
| date_parser | Callable | None | Function to use for converting a sequence of string columns to an array of datetime instances. The default uses dateutil.parser.parser to do the conversion. | None | 
| thousands | str | None | Thousands separator for parsing string columns to numeric. Note that this parameter is only necessary for columns stored as TEXT in Excel, any numeric columns will automatically be parsed, regardless of display format. | None | 
| options | dict | None | Optional keyword arguments passed to pyspark.pandas.read_excel and handed to TextFileReader. | None | 
| load_as_strings | bool | If True, converts all columns to string type to avoid datatype conversion errors in Spark. | False | 
| add_metadata_column | bool | If True, adds a metadata column containing the file location and sheet name. | False | 
| **kwargs | Any | Additional keyword arguments to maintain compatibility with the base class method. | {} | 
Source code in src/cloe_nessy/integration/reader/excel_reader.py
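A hedged usage sketch, assuming the module path shown above is importable; the file location and column range are placeholders:

```python
# Import path assumed from the source location shown above.
from cloe_nessy.integration.reader.excel_reader import ExcelDataFrameReader

reader = ExcelDataFrameReader()

# The file location is a placeholder.
df = reader.read(
    "abfss://container@account.dfs.core.windows.net/raw/report.xlsx",
    sheet_name=0,            # zero-indexed sheet position
    header=0,                # first row holds the column labels
    usecols="A:E",           # inclusive Excel column range
    load_as_strings=True,    # read everything as strings to avoid type clashes
    add_metadata_column=True,
)
```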
FileReader
Bases: BaseReader
Utility class for reading a file into a DataFrame.
This class reads data from files and loads it into a Spark DataFrame.
Source code in src/cloe_nessy/integration/reader/file_reader.py
__init__()
_add_metadata_column(df)
Add all metadata columns to the DataFrame.
Source code in src/cloe_nessy/integration/reader/file_reader.py
              
_get_reader()
_get_stream_reader()
read(location, *, spark_format=None, extension=None, schema=None, search_subdirs=True, options=None, add_metadata_column=False, delta_load_options=None, **kwargs)
Reads files from a specified location and returns a DataFrame.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| location | str | Location of files to read. | required | 
| spark_format | str | None | Format of files to read. If not provided, it will be inferred from the extension. | None | 
| extension | str | None | File extension (csv, json, parquet, txt). Used if spark_format is not provided. | None | 
| schema | str | None | Schema of the file. If None, schema will be inferred. | None | 
| search_subdirs | bool | Whether to include files in subdirectories. | True | 
| options | dict | None | Spark DataFrame reader options. | None | 
| add_metadata_column | bool | Whether to include __metadata column in the DataFrame. | False | 
| delta_load_options | DeltaLoadOptions | None | Options for delta loading, if applicable. When provided and spark_format is 'delta', uses delta loader for incremental loading of Delta Lake tables. | None | 
| **kwargs | Any | Additional keyword arguments to maintain compatibility with the base class method. | {} | 
Raises:
| Type | Description | 
|---|---|
| ValueError | If neither spark_format nor extension is provided. | 
| ValueError | If the provided extension is not supported. | 
| Exception | If there is an error while reading the files. | 
Note
- The spark_format parameter is used to specify the format of the files to be read.
- If spark_format is not provided, the method will try to infer it from the extension.
- The extension parameter is used to specify the file extension (e.g., 'csv', 'json', etc.).
- If both spark_format and extension are provided, spark_format will take precedence.
- The method will raise an error if neither spark_format nor extension is provided.
Returns:
| Type | Description | 
|---|---|
| DataFrame | A DataFrame containing the data from the files. | 
Source code in src/cloe_nessy/integration/reader/file_reader.py
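A hedged usage sketch, assuming the module path shown above is importable; the location, schema, and reader options are placeholders:

```python
# Import path assumed from the source location shown above.
from cloe_nessy.integration.reader.file_reader import FileReader

reader = FileReader()

# The location is a placeholder. spark_format takes precedence over extension.
df = reader.read(
    "abfss://container@account.dfs.core.windows.net/landing/events/",
    spark_format="csv",
    schema="id INT, name STRING, ts TIMESTAMP",
    search_subdirs=True,
    options={"header": "true", "delimiter": ";"},
    add_metadata_column=True,
)
```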
read_stream(location='', schema=None, format='delta', add_metadata_column=False, options=None, **_)
Reads the specified location as a stream and returns a streaming DataFrame.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| location | str | Location of files to read. | '' | 
| format | str | Format of files to read. | 'delta' | 
| schema | StructType | str | None | Schema of the file. | None | 
| add_metadata_column | bool | Whether to include __metadata column in the DataFrame. | False | 
| options | dict[str, Any] | None | Spark DataFrame reader options. | None | 
Raises:
| Type | Description | 
|---|---|
| ValueError | If location is not provided. | 
Returns:
| Type | Description | 
|---|---|
| DataFrame | A streaming DataFrame containing the read data. |
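A hedged sketch of a streaming read, reusing the FileReader instance from the previous example; the location and options are placeholders:

```python
# Continues the FileReader instance from the sketch above.
stream_df = reader.read_stream(
    location="abfss://container@account.dfs.core.windows.net/bronze/events/",
    format="delta",
    add_metadata_column=True,
    options={"maxFilesPerTrigger": "100"},
)

# stream_df is a streaming DataFrame; hand it to a writeStream sink to consume it.
```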