| Stage Id | Pool Name | Description | Submitted | Duration | Tasks: Succeeded/Total | Input | Output | Shuffle Read | Shuffle Write |
|---|---|---|---|---|---|---|---|---|---|
| 513138 | default | toStream at SparkDataStreamBuilder.scala:39 | 2025/08/02 15:26:11 | 10 ms | 1/1 | | | 1846.0 B | |
| 513137 | default | toLocalIterator at SparkDataStreamBuilder.scala:39 | 2025/08/02 15:26:11 | 69 ms | 26/26 | 11.8 KiB | | | 1846.0 B |

Stage 513138 call-site stack trace:

```
scala.collection.AbstractIterator.toStream(Iterator.scala:1431)
plusamp.middleware.model.core.data.SparkDataStreamBuilder.$anonfun$stream$1(SparkDataStreamBuilder.scala:39)
plusamp.scala.util.Profile$.time(Profile.scala:22)
plusamp.middleware.model.core.data.SparkDataStreamBuilder.<init>(SparkDataStreamBuilder.scala:39)
plusamp.middleware.graphql.datafile.SparkAccessor.$anonfun$retrieveData$3(SparkAccessor.scala:77)
scala.util.Success.$anonfun$map$1(Try.scala:255)
scala.util.Success.map(Try.scala:213)
scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
java.base/java.lang.Thread.run(Thread.java:829)
```

Stage 513137 RDD plan:

```
*(2) Sort [year#94399378 ASC NULLS FIRST], true, 0
+- Exchange rangepartitioning(year#94399378 ASC NULLS FIRST, 200), ENSURE_REQUIREMENTS, [id=#7537025]
   +- *(1) Project [year#94399378, turnover#94399453, (1.0 / cast(turnover#94399453 as double)) AS days_hold#94399494]
      +- *(1) Filter isnotnull(turnover#94399453)
         +- *(1) ColumnarToRow
            +- InMemoryTableScan [turnover#94399453, year#94399378], [isnotnull(turnover#94399453)]
               +- InMemoryRelation [year#94399378, retIC#94399379, resretIC#94399413, numcos#94399414, numdates#94399415, annual_bmret#94399416, annual_ret#94399418, std_ret#94399420, Sharpe_ret#94399422, PctPos_ret#94399423, TR_ret#94399426, IR_ret#94399427, annual_resret#94399430, std_resret#94399432, Sharpe_resret#94399434, PctPos_resret#94399436, TR_resret#94399437, IR_resret#94399439, annual_retnet#94399442, std_retnet#94399443, Sharpe_retnet#94399446, PctPos_retnet#94399447, TR_retnet#94399450, IR_retnet#94399451, turnover#94399453], StorageLevel(disk, ...
```

Stage 513137 call-site stack trace:

```
org.apache.spark.sql.Dataset.toLocalIterator(Dataset.scala:3000)
plusamp.middleware.model.core.data.SparkDataStreamBuilder.$anonfun$stream$1(SparkDataStreamBuilder.scala:39)
plusamp.scala.util.Profile$.time(Profile.scala:22)
plusamp.middleware.model.core.data.SparkDataStreamBuilder.<init>(SparkDataStreamBuilder.scala:39)
plusamp.middleware.graphql.datafile.SparkAccessor.$anonfun$retrieveData$3(SparkAccessor.scala:77)
scala.util.Success.$anonfun$map$1(Try.scala:255)
scala.util.Success.map(Try.scala:213)
scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
java.base/java.lang.Thread.run(Thread.java:829)
```
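The Project/Filter/Sort steps in stage 513137's plan have simple row-level semantics: drop rows with a null `turnover`, compute `days_hold = 1.0 / turnover`, and sort by `year` ascending. A pure-Python sketch of those semantics follows (Spark itself is not invoked; the field names come from the plan, while the sample rows are invented for illustration):

```python
# Sample input rows standing in for the cached InMemoryRelation
# (only the two columns the stage actually scans).
rows = [
    {"year": 2021, "turnover": 0.5},
    {"year": 2019, "turnover": 2.0},
    {"year": 2020, "turnover": None},  # dropped by the not-null filter
]

# *(1) Filter isnotnull(turnover#...)
kept = [r for r in rows if r["turnover"] is not None]

# *(1) Project [..., (1.0 / cast(turnover as double)) AS days_hold]
projected = [
    {"year": r["year"], "turnover": r["turnover"],
     "days_hold": 1.0 / float(r["turnover"])}
    for r in kept
]

# *(2) Sort [year ASC NULLS FIRST] (no null years in this sample)
projected.sort(key=lambda r: r["year"])
```

The `Exchange rangepartitioning(...)` between the two numbered codegen stages is the shuffle that shows up as the 1846.0 B shuffle write/read in the table above.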
| Stage Id | Pool Name | Description | Submitted | Duration | Tasks: Succeeded/Total | Input | Output | Shuffle Read | Shuffle Write |
|---|---|---|---|---|---|---|---|---|---|
| 513136 | default | toLocalIterator at SparkDataStreamBuilder.scala:39 | Unknown | Unknown | 0/1 | | | | |

Stage 513136 RDD plan (truncated in the original capture):

```
*(1) Project [CASE WHEN ((year#94399178 = NA) OR (year#94399178 = null)) THEN null ELSE cast(year#94399178 as int) END AS year#94399378, CASE WHEN ((retIC#94399179 = NA) OR (retIC#94399179 = null)) THEN null ELSE cast(retIC#94399179 as float) END AS retIC#94399379, CASE WHEN ((resretIC#94399180 = NA) OR (resretIC#94399180 = null)) THEN null ELSE cast(resretIC#94399180 as float) END AS resretIC#94399413, CASE WHEN ((numcos#94399181 = NA) OR (numcos#94399181 = null)) THEN null ELSE cast(numcos#94399181 as float) END AS numcos#94399414, CASE WHEN ((numdates#94399182 = NA) OR (numdates#94399182 = null)) THEN null ELSE cast(numdates#94399182 as int) END AS numdates#94399415, CASE WHEN ((annual_bmret#94399183 = NA) OR (annual_bmret#94399183 = null)) THEN null ELSE cast(annual_bmret#94399183 as float) END AS annual_bmret#94399416, CASE WHEN ((annual_ret#94399184 = NA) OR (annual_ret#94399184 = null)) THEN null ELSE cast(annual_ret#94399184 as float) END AS annual_ret#94399418, CASE WHEN ((std_ret#94399185 = NA) O...
```

Stage 513136 call-site stack trace:

```
org.apache.spark.sql.Dataset.toLocalIterator(Dataset.scala:3000)
plusamp.middleware.model.core.data.SparkDataStreamBuilder.$anonfun$stream$1(SparkDataStreamBuilder.scala:39)
plusamp.scala.util.Profile$.time(Profile.scala:22)
plusamp.middleware.model.core.data.SparkDataStreamBuilder.<init>(SparkDataStreamBuilder.scala:39)
plusamp.middleware.graphql.datafile.SparkAccessor.$anonfun$retrieveData$3(SparkAccessor.scala:77)
scala.util.Success.$anonfun$map$1(Try.scala:255)
scala.util.Success.map(Try.scala:213)
scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1426)
java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
```
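Every column in stage 513136's Project applies the same pattern: `CASE WHEN (col = 'NA' OR col = 'null') THEN null ELSE cast(col AS <type>) END`, i.e. the sentinel strings "NA" and "null" are mapped to real nulls before the cast. A pure-Python sketch of that per-value rule (the helper name and sample values are invented; the column names and target types come from the plan):

```python
def cast_or_null(value, cast):
    """Map the sentinel strings 'NA' and 'null' to None, otherwise cast."""
    return None if value in ("NA", "null") else cast(value)

# Raw string columns as they arrive, before the Project in the plan runs.
raw = {"year": "2020", "retIC": "NA", "numdates": "null"}

parsed = {
    "year": cast_or_null(raw["year"], int),       # ... ELSE cast(year as int)
    "retIC": cast_or_null(raw["retIC"], float),   # ... ELSE cast(retIC as float)
    "numdates": cast_or_null(raw["numdates"], int),
}
```

This explains why the downstream plan for stage 513137 can rely on a plain `isnotnull(turnover)` filter: the sentinel strings have already been normalized to nulls by this projection.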