File spark.changes of Package spark

-------------------------------------------------------------------
Thu Apr  4 18:39:00 UTC 2019 - jodavis@suse.com

- Add fix-spark-home-and-conf.patch
  The patch fixes SPARK_HOME and SPARK_CONF_DIR in the various
  bin/spark-* scripts so that they call find-spark-home.

-------------------------------------------------------------------
Tue Mar 12 01:56:20 UTC 2019 - jodavis@suse.com

- Add metrics-core-2.2.0.jar and kafka_2.10-0.8.2.1.jar to dist/jars
- Note that these versions must match the spark-kit package and that
  upstream OpenStack Monasca Transform uses Scala 2.10 and Kafka 0.8.

-------------------------------------------------------------------
Mon Mar 11 20:49:22 UTC 2019 - jodavis@suse.com

- Changed spark-streaming-kafka to 0.8-2.10 and changed the copied
  file name to include the versions

-------------------------------------------------------------------
Thu Mar  7 22:44:18 UTC 2019 - jodavis@suse.com

- Changed Scala version in .jar filename to 2.10 to match build.sh

-------------------------------------------------------------------
Thu Mar  7 17:00:14 UTC 2019 - Johannes Grassler <johannes.grassler@suse.com>

- Modified build.sh to build against Scala 2.10 

-------------------------------------------------------------------
Mon Feb 18 15:18:18 UTC 2019 - Johannes Grassler <johannes.grassler@suse.com>

- Build with -Phive and -Phive-thriftserver
- Replace the upstream fix-spark-home patch with a simplified one of our own
- Fix path for jars in service files

-------------------------------------------------------------------
Fri Feb 15 14:50:09 UTC 2019 - Johannes Grassler <johannes.grassler@suse.com>

- Update to version 2.2.3
  * [SPARK-26327] - Metrics in FileSourceScanExec not update correctly while
                    relation.partitionSchema is set
  * [SPARK-21402] - Fix java array of structs deserialization
  * [SPARK-22951] - count() after dropDuplicates() on emptyDataFrame returns
                    incorrect value
  * [SPARK-23207] - Shuffle+Repartition on a DataFrame could lead to incorrect
                    answers
  * [SPARK-23243] - Shuffle+Repartition on an RDD could lead to incorrect
                    answers
  * [SPARK-24603] - Typo in comments
  * [SPARK-24677] - TaskSetManager not updating successfulTaskDurations for old
                    stage attempts
  * [SPARK-24809] - Serializing LongHashedRelation in executor may result in
                    data error
  * [SPARK-24813] - HiveExternalCatalogVersionsSuite still flaky; fall back to
                    Apache archive
  * [SPARK-24927] - The hadoop-provided profile doesn't play well with
                    Snappy-compressed Parquet files
  * [SPARK-24948] - SHS filters wrongly some applications due to permission
                    check
  * [SPARK-24950] - scala DateTimeUtilsSuite daysToMillis and millisToDays
                    fails w/java 8 181-b13
  * [SPARK-24957] - Decimal arithmetic can lead to wrong values using codegen
  * [SPARK-25081] - Nested spill in ShuffleExternalSorter may access a released
                    memory page
  * [SPARK-25114] - RecordBinaryComparator may return wrong result when
                    subtraction between two words is divisible by
                    Integer.MAX_VALUE
  * [SPARK-25144] - distinct on Dataset leads to exception due to Managed
                    memory leak detected
  * [SPARK-25164] - Parquet reader builds entire list of columns once for each
                    column
  * [SPARK-25402] - Null handling in BooleanSimplification
  * [SPARK-25568] - Continue to update the remaining accumulators when failing
                    to update one accumulator
  * [SPARK-25591] - PySpark Accumulators with multiple PythonUDFs
  * [SPARK-25714] - Null Handling in the Optimizer rule BooleanSimplification
  * [SPARK-25726] - Flaky test: SaveIntoDataSourceCommandSuite.`simpleString is
                    redacted`
  * [SPARK-25797] - Views created via 2.1 cannot be read via 2.2+
  * [SPARK-25854] - mvn helper script always exits w/1, causing mvn builds to
                    fail
  * [SPARK-26233] - Incorrect decimal value with java beans and
                    first/last/max... functions
  * [SPARK-26537] - update the release scripts to point to gitbox
  * [SPARK-26545] - Fix typo in EqualNullSafe's truth table comment
  * [SPARK-26553] - NameError: global name '_exception_message' is not defined
  * [SPARK-26802] - CVE-2018-11760: Apache Spark local privilege escalation
                    vulnerability
  * [SPARK-26118] - Make Jetty's requestHeaderSize configurable in Spark
  * [SPARK-20715] - MapStatuses shouldn't be redundantly stored in both
                    ShuffleMapStage and MapOutputTracker
  * [SPARK-25253] - Refactor pyspark connection & authentication
  * [SPARK-25576] - Fix lint failure in 2.2
  * [SPARK-24564] - Add test suite for RecordBinaryComparator
- Add _service
- Drop fix-spark-home-and-conf.patch (no longer needed since all scripts use
  find-spark-home now)
- Adjust build.sh to account for the automatically selected Hadoop
  version and the new Kafka version
- Address various packaging deficiencies (bsc#1081531):
  * Remove configuration templates from /usr/share/spark
  * Fix static versioning
  * Get rid of wildcards in %files section
  * Improve Summary

-------------------------------------------------------------------
Sat Feb  9 01:17:58 UTC 2019 - ashwin.agate@suse.com

- Added Restart and RestartSec so that the spark-master and
  spark-worker services are restarted automatically (bsc#1091479)

-------------------------------------------------------------------
Wed Mar 21 16:59:20 UTC 2018 - ashwin.agate@suse.com

- Remove drizzle jdbc jar (bsc#1084084)

-------------------------------------------------------------------
Thu Mar  8 11:12:48 UTC 2018 - tbechtold@suse.com

- Add fix-spark-home-and-conf.patch
  The patch fixes SPARK_HOME and SPARK_CONF_DIR in the various
  bin/spark-* scripts.

-------------------------------------------------------------------
Thu Mar  8 01:08:05 UTC 2018 - ashwin.agate@suse.com

- Added SPARK_DAEMON_JAVA_OPTS to set Java heap size settings in the
  spark-worker and spark-master service files.

-------------------------------------------------------------------
Tue Mar  6 04:46:58 UTC 2018 - tbechtold@suse.com

- Install /etc/spark/spark-env. This script is read automatically
  during startup and can be used for custom configuration
- Install /etc/spark/spark-defaults.conf
- Create /run/spark dir via systemd tmpfiles
- Add missing Requires/BuildRequires for systemd
- Drop the openstack-suse-macros BuildRequires and use the standard
  mechanism to create the spark user/group and home directory
- Add useful description

-------------------------------------------------------------------
Fri Feb 23 20:44:07 UTC 2018 - dmueller@suse.com

- cleanup spec file

-------------------------------------------------------------------
Fri Feb 23 04:20:00 UTC 2018 - jodavis@suse.com

- Fix spark-worker.service to use port 7077, avoiding conflict (bsc#1081275)

-------------------------------------------------------------------
Mon Feb 19 10:53:44 UTC 2018 - tbechtold@suse.com

- Fix ExecStartPre bash syntax in spark-worker.service (bsc#1081275)

-------------------------------------------------------------------
Mon Jul 24 20:55:47 UTC 2017 - jbrownell@suse.com

- Initial package