When attempting to read documents from Solr in batches using the cursorMark and rows parameters, as in the following code sample:
    val solrDF = spark.read
      .format("solr")
      .option("zkHost", zookeeperHosts)
      .option("collection", collectionName)
      .option("query", configuredSolrQuery)
      .option("rows", batchSize)
      .option("cursorMark", cursorMark)
      .option("wt", "json")
      .option("sort", "MSG_REF_UK_ID asc")
      .load()
the returned solrDF does not contain the nextCursorMark value that Solr includes in its response, so this approach cannot be used to load the data from Solr in batches.
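For comparison, cursor-based paging normally works by reading nextCursorMark out of each response and passing it into the next request, which is straightforward with SolrJ directly but has no equivalent in the DataFrame API. A minimal sketch of that loop is below; zookeeperHosts, collectionName, configuredSolrQuery, and batchSize are placeholder values matching the sample above, and the code assumes a reachable SolrCloud cluster:

    import java.util.{Collections, Optional}
    import org.apache.solr.client.solrj.SolrQuery
    import org.apache.solr.client.solrj.impl.CloudSolrClient
    import org.apache.solr.common.params.CursorMarkParams

    val zookeeperHosts = "zk1:2181"      // placeholder
    val collectionName = "myCollection"  // placeholder
    val configuredSolrQuery = "*:*"      // placeholder
    val batchSize = 1000

    val client = new CloudSolrClient.Builder(
      Collections.singletonList(zookeeperHosts), Optional.empty[String]()).build()

    var cursorMark = CursorMarkParams.CURSOR_MARK_START
    var done = false
    while (!done) {
      val q = new SolrQuery(configuredSolrQuery)
        .setRows(batchSize)
        .setSort("MSG_REF_UK_ID", SolrQuery.ORDER.asc)  // cursors require a stable sort on a unique field
      q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursorMark)
      val rsp = client.query(collectionName, q)
      // ... process rsp.getResults for this batch ...
      val next = rsp.getNextCursorMark  // the value the DataFrame read never surfaces
      done = next == cursorMark         // an unchanged cursor means no more results
      cursorMark = next
    }
    client.close()

This loop only illustrates where nextCursorMark comes from; it bypasses Spark entirely, so it is a workaround rather than a fix for the connector behaviour described above.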