====== Script for Processing and Aggregating Track Data to CSV ======

This Python script walks the directory tree given as its first command-line argument, collects files with a `.track` extension, and reads each one with `read_trk` (from pysep). From every track file it extracts the last entry, tags it with a model name derived from the file name (the first two dot-separated components), and aggregates the resulting rows into a single DataFrame, which is written to `lastModels.csv` in the root directory. Track files smaller than 10 kB are skipped, and the script reports how many were skipped.

<code python>
import os
import sys

import pandas as pd
from tqdm import tqdm

from pysep.io.trk import read_trk

# Track files smaller than this (in bytes) are skipped.
MIN_SIZE_BYTES = 1e4

# Collect the first '.track' file found in each directory under the root
# passed as the first command-line argument.
trks = []
for root, dirs, files in os.walk(sys.argv[1]):
    trkFiles = [f for f in files if f.endswith(".track")]
    if trkFiles:
        trks.append(os.path.join(root, trkFiles[0]))

skipped = 0
rows = []
for trk in tqdm(trks):
    if os.stat(trk).st_size < MIN_SIZE_BYTES:
        skipped += 1
        continue
    # Model name: the first two dot-separated parts of the file name.
    modelName = '.'.join(os.path.basename(trk).split('.')[:2])
    trkContent, metaContent = read_trk(trk)
    # Keep only the last entry of the first track DataFrame.
    lastModel = trkContent[0].iloc[-1].to_dict()
    lastModel["MODELName"] = modelName
    rows.append(lastModel)

# Aggregate the per-file rows into one table and write it next to the input.
lastModels = pd.DataFrame(rows)
lastModels.to_csv(os.path.join(sys.argv[1], "lastModels.csv"), index=False)
print(f"Skipped {skipped} track files")
</code>
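
The aggregation step can be tried without pysep. The sketch below assumes only that `read_trk` yields a list of DataFrames; it mimics that with in-memory DataFrames (`fake_tracks` is an invented stand-in for real track files) to show how the last row of each track, plus the derived model name, becomes one row of the summary table.

<code python>
# Illustrative sketch only: fake_tracks stands in for files read via read_trk,
# which is assumed to return a list of DataFrames per track file.
import pandas as pd

fake_tracks = {
    "modelA.run1.extra.track": pd.DataFrame({"step": [1, 2, 3], "loss": [0.9, 0.5, 0.3]}),
    "modelB.run2.track": pd.DataFrame({"step": [1, 2], "loss": [0.8, 0.6]}),
}

rows = []
for fname, df in fake_tracks.items():
    # Model name = first two dot-separated components of the file name.
    modelName = '.'.join(fname.split('.')[:2])
    # Take the last entry of the track and tag it with the model name.
    lastModel = df.iloc[-1].to_dict()
    lastModel["MODELName"] = modelName
    rows.append(lastModel)

# One row per track file, with columns from the track plus MODELName.
lastModels = pd.DataFrame(rows)
print(lastModels)
</code>

Building the table from a list of row dicts (rather than appending to a DataFrame in a loop) keeps the loop cheap and works on current pandas versions.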