# excel_reader

## ExcelDataFrameReader

Bases: `BaseReader`

Utility class for reading an Excel file into a DataFrame.

This class uses the Pandas API on Spark to read Excel files into a DataFrame. More information can be found in the official documentation.
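For context, the sketch below shows the underlying capability this reader builds on, reading an Excel file with the Pandas API on Spark and converting the result to a Spark DataFrame. It is only an illustration, not the class's implementation; the file path is a placeholder.

```python
# Minimal sketch of pyspark.pandas.read_excel; the path is a placeholder.
import pyspark.pandas as ps

psdf = ps.read_excel("/data/example.xlsx", sheet_name=0, header=0)
sdf = psdf.to_spark()  # convert the pandas-on-Spark frame to a Spark DataFrame
sdf.printSchema()
```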
Source code in src/cloe_nessy/integration/reader/excel_reader.py
### `__init__()`

### `_add_metadata_column(df, location, sheet_name)`
    Adds a metadata column to a DataFrame.
This method appends a column named `__metadata` to the given DataFrame, containing a map
of metadata related to the Excel file read operation. The metadata includes the current
timestamp, the location of the Excel file, and the sheet name(s) from which the data was read.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| df | DataFrame | The DataFrame to which the metadata column will be added. | required | 
| location | str | The file path of the Excel file. | required | 
| sheet_name | str | int | list | The sheet name or sheet index used when reading the Excel file. | required | 
Returns:
| Type | Description | 
|---|---|
| DataFrame | The original DataFrame with an added `__metadata` column. | 
Source code in src/cloe_nessy/integration/reader/excel_reader.py
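The metadata map can be assembled with standard PySpark column functions. The following is only an illustrative sketch of the behavior described above, not the class's actual code; the map key names and the string handling of `sheet_name` are assumptions.

```python
# Illustrative sketch of adding a __metadata map column; key names and the
# str() handling of sheet_name are assumptions, not the library's exact code.
from pyspark.sql import DataFrame
import pyspark.sql.functions as F

def add_metadata_column(df: DataFrame, location: str, sheet_name) -> DataFrame:
    return df.withColumn(
        "__metadata",
        F.create_map(
            F.lit("read_timestamp"), F.current_timestamp().cast("string"),
            F.lit("location"), F.lit(location),
            F.lit("sheet_name"), F.lit(str(sheet_name)),
        ),
    )
```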
              
### `read(location, *, sheet_name=0, header=0, index_col=None, usecols=None, true_values=None, false_values=None, nrows=None, na_values=None, keep_default_na=True, parse_dates=False, date_parser=None, thousands=None, options=None, load_as_strings=False, add_metadata_column=False, **kwargs)`

Reads an Excel file at the specified location and returns a DataFrame.
Parameters:
| Name | Type | Description | Default | 
|---|---|---|---|
| location | str | Location of files to read. | required | 
| sheet_name | str | int | list | Strings are used for sheet names. Integers are used in zero-indexed sheet positions. Lists of strings/integers are used to request multiple sheets. Specify None to get all sheets. | 0 | 
| header | int | list[int] | Row to use for column labels. If a list of integers is passed those row positions will be combined. Use None if there is no header. | 0 | 
| index_col | int | list[int] | None | Column to use as the row labels of the DataFrame. Pass None if there is no such column. If a list is passed, those columns will be combined. | None | 
| usecols | int | str | list | Callable | None | Return a subset of the columns. If None, then parse all columns. If str, then indicates comma separated list of Excel column letters and column ranges (e.g. “A:E” or “A,C,E:F”). Ranges are inclusive of both sides. If list of int, then indicates list of column numbers to be parsed. If list of string, then indicates list of column names to be parsed. If Callable, then evaluate each column name against it and parse the column if the Callable returns True. | None | 
| true_values | list | None | Values to consider as True. | None | 
| false_values | list | None | Values to consider as False. | None | 
| nrows | int | None | Number of rows to parse. | None | 
| na_values | list[str] | dict[str, list[str]] | None | Additional strings to recognize as NA/NaN. If dict passed, specific per-column NA values. | None | 
| keep_default_na | bool | If na_values are specified and keep_default_na is False the default NaN values are overridden, otherwise they're appended to. | True | 
| parse_dates | bool | list | dict | The behavior is as follows: - bool. If True -> try parsing the index. - list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column. - list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column. - dict, e.g. {"foo": [1, 3]} -> parse columns 1, 3 as date and call result "foo". If a column or index contains an unparseable date, the entire column or index will be returned unaltered as an object data type. | False | 
| date_parser | Callable | None | Function to use for converting a sequence of string columns to an array of datetime instances. The default uses dateutil.parser.parser to do the conversion. | None | 
| thousands | str | None | Thousands separator for parsing string columns to numeric. Note that this parameter is only necessary for columns stored as TEXT in Excel, any numeric columns will automatically be parsed, regardless of display format. | None | 
| options | dict | None | Optional keyword arguments passed to pyspark.pandas.read_excel and handed to TextFileReader. | None | 
| load_as_strings | bool | If True, converts all columns to string type to avoid datatype conversion errors in Spark. | False | 
| add_metadata_column | bool | If True, adds a metadata column containing the file location and sheet name. | False | 
| **kwargs | Any | Additional keyword arguments to maintain compatibility with the base class method. | {} | 
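A short usage sketch of `read` follows; it assumes an active Spark session, the file location and sheet name are placeholders, and the import path is inferred from the source path shown in this page.

```python
# Usage sketch; assumes an active Spark session. The file location and sheet
# name are placeholders; the import path is inferred from the source path above.
from cloe_nessy.integration.reader.excel_reader import ExcelDataFrameReader

reader = ExcelDataFrameReader()
df = reader.read(
    "/mnt/landing/sales/2024-01.xlsx",  # placeholder location
    sheet_name="Orders",                # single named sheet
    usecols="A:E",                      # parse only columns A through E
    load_as_strings=True,               # cast all columns to string for Spark safety
    add_metadata_column=True,           # append the __metadata map column
)
df.show(5, truncate=False)
```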
Source code in src/cloe_nessy/integration/reader/excel_reader.py