
Commit 2f0c83d

spelling: that
Signed-off-by: Josh Soref <[email protected]>
1 parent 0005351 commit 2f0c83d

2 files changed: 2 additions, 2 deletions


_sass/vendors/neat/grid/_span-columns.scss

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@
 /// @param {List} $span
 ///   A list containing `$columns`, the unitless number of columns the element spans (required), and `$container-columns`, the number of columns the parent element spans (optional).
 ///
-///   If only one value is passed, it is assumed that it's `$columns` and that that `$container-columns` is equal to `$grid-columns`, the total number of columns in the grid.
+///   If only one value is passed, it is assumed that it's `$columns` and that `$container-columns` is equal to `$grid-columns`, the total number of columns in the grid.
 ///
 ///   The values can be separated with any string such as `of`, `/`, etc.
 ///
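The comment edited above documents the `$span` argument of Neat's `span-columns` mixin (the mixin defined in this file). As a rough illustration of the two calling forms it describes, not part of this commit and assuming Neat's default `$grid-columns: 12`, usage might look like this:

```scss
// Illustrative sketch only (not part of this diff).
// Assumes Bourbon Neat with the default 12-column grid ($grid-columns: 12).
.sidebar {
  // Two values separated by `of`: the element spans 4 of the parent's 12 columns.
  @include span-columns(4 of 12);
}

.content {
  // One value: `$container-columns` falls back to `$grid-columns`.
  @include span-columns(8);
}
```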

gsoc/2017.md

Lines changed: 1 addition & 1 deletion
@@ -338,7 +338,7 @@ Supervised by [@olafurpg](https://github.com/olafurpg)
 
 ### Implementing a Benchmark Suite for Big Data and Machine Learning Applications
 
-In this project, the aim is to design and implement several larger Big Data and Machine Learning applications, which will be used to regularly test the performance of the Scala compiler releases, and to test the performance of JVMs running these applications. By the time the project is completed, you are expected to implement one larger data-intensive application at least for these frameworks: [Spark](https://spark.apache.org/), [Flink](https://flink.apache.org/), [Storm](https://storm.apache.org/), [Kafka](https://kafka.apache.org/) and [DeepLearning4j](https://github.com/deeplearning4j). Each of the applications will have to be accompanied with a dataset used to run the application.
+In this project, the aim is to design and implement several larger Big Data and Machine Learning applications, which will be used to regularly test the performance of the Scala compiler releases, and to test the performance of JVMs running these applications. By the time that the project is completed, you are expected to implement one larger data-intensive application at least for these frameworks: [Spark](https://spark.apache.org/), [Flink](https://flink.apache.org/), [Storm](https://storm.apache.org/), [Kafka](https://kafka.apache.org/) and [DeepLearning4j](https://github.com/deeplearning4j). Each of the applications will have to be accompanied with a dataset used to run the application.
 
 This project is an excellent opportunity to familiarize yourself with these modern cutting-edge frameworks for distributed computing and machine learning!

0 commit comments