Using Trend as a feature in time series sliding window?


























I have a time series, and I am using an overlapping sliding window to extract features from each window and label it accordingly. From each overlapping window of size n, I want to extract the trend (a linear fit) and use it as a feature. Any ideas on how to do this? For now I am taking the slope coefficient and the intercept as features from each window.

Is taking the slope and intercept of a linear fit in a window as features okay, or are there other techniques I can use? My goal is to feed the trend of each window into a DL model.



Please give your suggestions, and ask for more clarification if required.



UPDATE:



[Data plot with sliding window visual]



The blue box is window_1. I am extracting features for each window and labeling them. As we can see, the linear trend (approximated by the red line) in this window is slightly upward, while in window_2 (violet box) it is downward. I want to encode this information as features; how can I do that? Is taking the slope and intercept coefficients sufficient, or should I do something else?

So in the deep learning model, each entire window will be one observation.
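For what it's worth, the windowed slope/intercept extraction described above can be sketched as follows (a minimal illustration; the function name and the toy series are made up):

```python
import numpy as np

def window_trend_features(series, window, step):
    """Slide an overlapping window over the series and return
    (slope, intercept) of a least-squares line fit per window."""
    t = np.arange(window)
    feats = []
    for start in range(0, len(series) - window + 1, step):
        seg = series[start:start + window]
        # np.polyfit returns coefficients highest degree first: [slope, intercept]
        slope, intercept = np.polyfit(t, seg, deg=1)
        feats.append((slope, intercept))
    return np.array(feats)

# toy series: upward drift then downward drift
y = np.concatenate([np.linspace(0, 5, 50), np.linspace(5, 0, 50)])
feats = window_trend_features(y, window=20, step=10)
```

Each row of `feats` would then be part of one observation for the model; the sign of the slope column distinguishes the upward windows from the downward ones.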



Thanks, and I hope it is clearer now.






































machine-learning time-series deep-learning predictive-models feature-engineering

asked Feb 4 at 7:57 by JAbr, edited Feb 5 at 7:42

          4 Answers































          We often de-trend the time series before modeling other aspects of it. Depending on your needs, you can use differencing, i.e. model $y'_t = y_t - y_{t-m}$, or subtract the trend, e.g. using linear regression $y_t = \beta_0 + T_t + \varepsilon_t$ (where $\beta_0$ is the intercept and $T_t = \beta_1 t$ is the time component), then subtract the estimated trend, $\tilde{y}_t = y_t - (\beta_0 + T_t)$, and model $\tilde{y}_t$ using another algorithm. Notice that the second approach is a direct consequence of assuming an additive trend† (e.g. $y_t = T_t + f(X_t) + \varepsilon_t$). The subtracted trend $T_t$ does not have to be linear; it can be a smooth function (e.g. LOESS), piecewise-linear, a moving average, an average of linear trends over overlapping windows, or something else.
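A minimal numpy sketch of both options on a made-up series (the trend slope and noise level here are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
y = 0.05 * t + rng.normal(0.0, 0.5, size=t.size)  # linear trend + noise

# Option 1: differencing, y'_t = y_t - y_{t-1}
y_diff = np.diff(y)

# Option 2: fit y_t = beta_0 + beta_1 * t by least squares,
# then subtract the estimated trend
beta_1, beta_0 = np.polyfit(t, y, deg=1)
y_detrended = y - (beta_0 + beta_1 * t)
```

Either `y_diff` or `y_detrended` can then be modeled by another algorithm, as described above.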



          Beyond simplicity, when modeling a de-trended series you can also easily visualize the series to gain better insights (what do you see when plotting the de-trended series?) or notice problems with the assumed trend (e.g. "jumps" at the borders of windows when using piecewise trend components). The drawback of the two-stage approach is that a single model estimating the trend and the other components jointly would also correct for changes related to those other components, rather than ignoring them as happens when the trend is estimated independently.



          I don't see how using the slope parameters as features would add anything over de-trending the series. If you did so, your network would still need to learn how to use this information, so why not use a model that learns the trend by itself (by adding time as a feature)?




          † Like in STL decomposition, $y_t = T_t + S_t + \varepsilon_t$, where $T_t$ is trend, $S_t$ is seasonality, and $\varepsilon_t$ is random noise; or in Prophet, $y_t = T_t + S_t + H_t + \varepsilon_t$, where $T_t$ is a piecewise-linear trend, $S_t$ is seasonality, and $H_t$ are holiday effects (Taylor and Letham, 2017).






          answered Feb 4 at 11:32 by Tim, edited Feb 5 at 11:05
























          • This doesn't answer the OP's question. It seems like they are approximating a piecewise trend with very high-frequency data, but the OP needs to give more details. – generic_user, Feb 4 at 13:22










          • @generic_user It does; I didn't say that the trend needs to be linear here, and I gave the example of a piecewise-linear trend. Nothing about high-frequency data was mentioned by the OP. – Tim, Feb 4 at 14:07










          • That's right, Tim, I understood what you meant, but the problem is slightly different. I am updating my question with visuals to make things more clear. Thanks a lot to all of you. – JAbr, Feb 5 at 7:24










          • @JAbr See my edits. – Tim, Feb 5 at 11:06










          • @JAbr I don't understand what you mean. What I am suggesting is that you estimate the trend somehow and subtract it from the original series (or use differencing), then model the de-trended series. You can use moving windows or something else; it is your choice, depending on the characteristics of the data. – Tim, Feb 5 at 11:37


































          The main thing to be careful of in predictive time series models is to avoid leaks from the future. The classic example is where you calculate the moving average or slope centered on the current point. This will include N points to the left and right, and the points to the right are from the future. You won't have those points when your model is actually going to be used.



          So you need to think about exactly what information your model will have at the point you want to make a prediction, and not build your model on any other data. (That's why people use trailing statistics, though moving averages never really make sense, even if you use the less-wrong trailing version.)
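As an illustration of the leak (a hypothetical sketch, not from the answer): compare a trailing window mean with a centered one on a toy series.

```python
import numpy as np

y = np.arange(10, dtype=float)
k = 3  # window width

# Trailing mean: uses only points up to and including t (safe at prediction time)
trailing = np.array([y[max(0, i - k + 1):i + 1].mean() for i in range(len(y))])

# Centered mean: uses points to the right of t as well -- those are future
# values you will not have when the model is actually used
centered = np.array([y[max(0, i - 1):i + 2].mean() for i in range(len(y))])
```

At t = 5, the trailing mean averages y[3..5], while the centered mean averages y[4..6] and thus peeks at the future point y[6].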



          In answer to your comment, I am uncomfortable with your proposed method for several reasons, but first let me say that I think your instinct on this is probably good: you don't want to come up with an ad hoc method for formalizing what you see in time series as you look at them. None of this "Well, if I subtract the 3rd, 5th, and 7th values from the average and take the largest...", and that's a good instinct.



          But your method introduces other complications that worry me:




          1. You’ll probably end up using a regression method which is thrown off by outliers. In that case a single spike point can cause it to calculate a misleading slope.


          2. Do you understand regression and its diagnostics? Are you going to grab the slope and intercept coefficients without looking at their “statistical significance”? If not, what will you do if they are not significant?



          3. If you do understand regression diagnostics, do you know how they can be inaccurate (overly optimistic) when applied to time series, and how to correct for that?



            Calculating trends by regression is trotting out machinery that is often more complex than people realize and yet not as powerful as State Space approaches, for example, that could give you direct answers.



            That’s why I’m encouraging you to think of other features that might be simpler and more directly address your actual targets.




          4. Are you sure that there really are up and down segments? From what you say there's no overall trend or any seasonality, so the series is basically wandering around. Depending on the autocorrelation, this can make series that look like noise or that look like something's going on. But it really isn't. Using a complex tool like regression could fool you into thinking what you're seeing must be real, when it might just be noise.



            If you're using R, try plot(arima.sim(n = 50, list(ar = c(0.882)))) several times in a row. You'll see fascinating patterns, but...




          On the other hand, if you are rigorous with training and testing and avoiding future leaks, you can certainly try the regression-slope method and not care that much whether it's an approved idea or not: the proof is in the predicting. If it predicts well, and you're not messing something up to fool yourself about your out-of-sample accuracy, it's good enough. You might not be able to reverse-engineer your model for good explanations of why, but if that's not your concern, no need to worry. (And it might be slow to deploy on a large scale, but that's another matter.)



          Pre-edit continuation of answer:



          Also, we love to calculate trends by throwing a set of data into an OLS regression. But is that actually necessary? Is the trend in this case just the first value in the window subtracted from the last? Or do you care about the number of positive point-to-point deltas versus negative deltas instead?
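The cheap alternatives mentioned above can be sketched side by side with the fitted slope (an illustrative helper; the window values are made up):

```python
import numpy as np

def simple_trend_features(window):
    """Three cheap trend summaries for one window:
    the OLS slope, the endpoint delta, and the up-minus-down step count."""
    deltas = np.diff(window)
    return {
        "ols_slope": np.polyfit(np.arange(len(window)), window, 1)[0],
        "end_minus_start": window[-1] - window[0],
        "n_up_minus_n_down": int((deltas > 0).sum() - (deltas < 0).sum()),
    }

w = np.array([1.0, 2.0, 1.5, 2.5, 3.0])
f = simple_trend_features(w)
```

For many purposes the last two are just as informative as the regression slope, and they carry none of the diagnostic baggage discussed above.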



          Last, you can't just use an OLS regression naively on a time series. Time series are autocorrelated, which violates an assumption of OLS. (The main effect is that your diagnostics are too optimistic and you think your slope is statistically different from 0 and it's not.) Not to mention that time series have seasonal effects. That's why there's old-school ARIMA. (A more modern approach is State Space, also known as Kalman, but that's probably too far afield for your simple application, hence my question about whether you even really need to do regressions at all.)






























          • Thanks, Wayne, a wonderful and well-thought-out answer. To your questions: the time series I am working with is sensor data, which generally has no trend and no seasonality; I already checked the ACF and stationarity. But when I take smaller windows of this time series, it actually has some trends going up and down, and in my deep learning regression model I want to encode this information as features. Is it a good approach if I just fit a line on a window and take the slope and intercept coefficients? – JAbr, Feb 5 at 7:18










          • You also mentioned taking the first and last data points in a window; that would do the trick, but it does not capture all the data patterns, e.g. the first and last data points can be at the same value, yet if you fit a line to this window you may still find the slope going up or down. – JAbr, Feb 5 at 7:18










          • @JAbr Edited my answer. – Wayne, Feb 6 at 0:30
































          An alternative, perhaps more principled, way of estimating "local" trends is to use structural models "à la Harvey". These are models which can be cast in state-space form and allow explicit treatment of trend and seasonality, among other things. They can be supplemented with fixed effects and made more elaborate as needed.



          Software is widely available: in R, the function StructTS fits several different structural models, and there are packages for multivariate series as well.
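To make the state-space idea concrete, here is a minimal hand-rolled numpy sketch of the Kalman filtering recursion behind the simplest structural model, the local level model (the variances and test series are assumptions for illustration; this is not the StructTS API):

```python
import numpy as np

def local_level_filter(y, sigma2_eps, sigma2_eta, a0=0.0, p0=1e7):
    """Kalman filter for the local level model
         y_t  = mu_t + eps_t,      eps_t ~ N(0, sigma2_eps)
         mu_t = mu_{t-1} + eta_t,  eta_t ~ N(0, sigma2_eta)
    Returns the filtered estimates of the level mu_t."""
    a, p = a0, p0
    out = []
    for obs in y:
        p = p + sigma2_eta            # predict: the level is a random walk
        k = p / (p + sigma2_eps)      # Kalman gain
        a = a + k * (obs - a)         # update with the new observation
        p = (1.0 - k) * p
        out.append(a)
    return np.array(out)

rng = np.random.default_rng(1)
level = np.cumsum(rng.normal(0.0, 0.1, 100))   # slowly drifting "trend"
y = level + rng.normal(0.0, 0.5, 100)          # noisy observations
a = local_level_filter(y, sigma2_eps=0.25, sigma2_eta=0.01)
```

The filtered estimates `a` are a direct, explicit "local trend" of the kind this answer advocates, using only past and current observations at each step.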



          Aside from the books by Harvey, Durbin and Koopman's (2011) Time Series Analysis by State Space Methods (Oxford University Press) is a very nice introduction.




























          • Thanks a lot, Tusell, I will definitely look into it; it seems quite interesting. – JAbr, Feb 5 at 7:19


































          I cannot comment, hence posting as an answer.



          If you are not using deep learning, is it mandatory for you to use an overlapping sliding window? This technique is generally used in deep learning sequence models, where you convert your time series data into 'n' small sequences, making them 'n' training examples.
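That conversion into 'n' training examples can be sketched as follows (an illustrative helper; the function name and toy input are made up):

```python
import numpy as np

def make_sequences(series, window, step=1):
    """Turn one long series into overlapping training examples of
    shape (n_windows, window) for a sequence model."""
    starts = range(0, len(series) - window + 1, step)
    return np.stack([series[s:s + window] for s in starts])

# 10 points, windows of 4 with stride 2 -> windows starting at 0, 2, 4, 6
X = make_sequences(np.arange(10), window=4, step=2)
```

Each row of X then becomes one observation, matching the "entire window is one observation" setup in the question.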



          For a standard time series solution using ARIMA, you can use Tim's method.




























          • Yes, I am doing the same thing, and it is not strictly necessary, but during my research I should try everything out. – JAbr, Feb 5 at 7:45
















          4 Answers
          4






          active

          oldest

          votes








          4 Answers
          4






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          4












          $begingroup$

          We often de-trend the time-series before modeling other aspects of it. Depending on your needs, you can use differencing, i.e. model $y'_t = y_t - y_{t-m}$, or subtract the trend, e.g. using linear regression $y_t = beta_0 + T_t + varepsilon_t$ ($beta_0$ is intercept and $T_t = beta_1 t$ is the parameter for time), next subtract the estimated trend $tilde{y}_t = y_t - (beta_0 + T_t) $ and then model the $tilde{y}_t$ using another algorithm. Notice that second approach is a direct consequence of assuming additive trend, e.g. $y_t = T_t + f(X_t) + varepsilon_t$. When subtracting the trend $T_t$, it does not have to be linear trend, it can be smooth function (e.g. LOESS), piecewise-linear, moving average, but also averaging linear trends in overlapping windows, or something else.



          Beyond simplicity, when modeling de-trended series you also easily visualize the series to gain better insights (what do you see when plotting de-trended series?) or notice problems with the assumed trend (e.g. "jumps" at the borders of windows, when using piece-wise trend components). The drawback of using two-stage approach, is that when estimating trend and other components using a single model, you would be also correcting for changes related to the other components, rather then ignoring them as with estimating the trend independently.



          I don't see how using the slope parameters as features would have any added effect over de-trending the series. If you did so, your network would still need to learn how to use this information, so why not use model that learns the trend by itself (by adding time as a feature)?




          † - like in in STL decomposition, $y_t = T_t + S_t + varepsilon_t$, where $T_t$ is trend and $S_t$ is seasonality and $varepsilon_t$ is random noise, or in Prophet, $y_t = T_t + S_t + H_t + varepsilon_t$, where $T_t$ is piecewice-liear trend, $S_t$ is seasonality and $H_t$ are holiday effects (Taylor and Letham, 2017).






          share|cite|improve this answer











          $endgroup$













          • $begingroup$
            This doesn't answer OPs question. Seems like they are approximating a piecewise trend with very high-frequency data. But OP needs to give more details.
            $endgroup$
            – generic_user
            Feb 4 at 13:22










          • $begingroup$
            @generic_user it does, I didn't say that the trend needs to be linear in here, I gave example of piecewise-linear trend. Nothing about high-frequency data was mentioned by OP.
            $endgroup$
            – Tim
            Feb 4 at 14:07










          • $begingroup$
            Thats right Tim, i understood what you meant, but problem is slightly different. i am updating my question with visuals to make things more clear. Thanks a lot to all of you.
            $endgroup$
            – JAbr
            Feb 5 at 7:24










          • $begingroup$
            @JAbr see my edits.
            $endgroup$
            – Tim
            Feb 5 at 11:06










          • $begingroup$
            @JAbr I don't understand what you mean. What I am suggesting is that you estimate the trend somehow and subtract it from the original series (or use differencing), then model the de-trended series. You can use moving windows, or something else, it is your choice, depending on characteristics of the data.
            $endgroup$
            – Tim
            Feb 5 at 11:37


















          4












          $begingroup$

          We often de-trend the time-series before modeling other aspects of it. Depending on your needs, you can use differencing, i.e. model $y'_t = y_t - y_{t-m}$, or subtract the trend, e.g. using linear regression $y_t = beta_0 + T_t + varepsilon_t$ ($beta_0$ is intercept and $T_t = beta_1 t$ is the parameter for time), next subtract the estimated trend $tilde{y}_t = y_t - (beta_0 + T_t) $ and then model the $tilde{y}_t$ using another algorithm. Notice that second approach is a direct consequence of assuming additive trend, e.g. $y_t = T_t + f(X_t) + varepsilon_t$. When subtracting the trend $T_t$, it does not have to be linear trend, it can be smooth function (e.g. LOESS), piecewise-linear, moving average, but also averaging linear trends in overlapping windows, or something else.



          Beyond simplicity, when modeling de-trended series you also easily visualize the series to gain better insights (what do you see when plotting de-trended series?) or notice problems with the assumed trend (e.g. "jumps" at the borders of windows, when using piece-wise trend components). The drawback of using two-stage approach, is that when estimating trend and other components using a single model, you would be also correcting for changes related to the other components, rather then ignoring them as with estimating the trend independently.



          I don't see how using the slope parameters as features would have any added effect over de-trending the series. If you did so, your network would still need to learn how to use this information, so why not use model that learns the trend by itself (by adding time as a feature)?




          † - like in in STL decomposition, $y_t = T_t + S_t + varepsilon_t$, where $T_t$ is trend and $S_t$ is seasonality and $varepsilon_t$ is random noise, or in Prophet, $y_t = T_t + S_t + H_t + varepsilon_t$, where $T_t$ is piecewice-liear trend, $S_t$ is seasonality and $H_t$ are holiday effects (Taylor and Letham, 2017).






          share|cite|improve this answer











          $endgroup$













          • $begingroup$
            This doesn't answer OPs question. Seems like they are approximating a piecewise trend with very high-frequency data. But OP needs to give more details.
            $endgroup$
            – generic_user
            Feb 4 at 13:22










          • $begingroup$
            @generic_user it does, I didn't say that the trend needs to be linear in here, I gave example of piecewise-linear trend. Nothing about high-frequency data was mentioned by OP.
            $endgroup$
            – Tim
            Feb 4 at 14:07










          • $begingroup$
            Thats right Tim, i understood what you meant, but problem is slightly different. i am updating my question with visuals to make things more clear. Thanks a lot to all of you.
            $endgroup$
            – JAbr
            Feb 5 at 7:24










          • $begingroup$
            @JAbr see my edits.
            $endgroup$
            – Tim
            Feb 5 at 11:06










          • $begingroup$
            @JAbr I don't understand what you mean. What I am suggesting is that you estimate the trend somehow and subtract it from the original series (or use differencing), then model the de-trended series. You can use moving windows, or something else, it is your choice, depending on characteristics of the data.
            $endgroup$
            – Tim
            Feb 5 at 11:37
















          4












          4








          4





          $begingroup$

          We often de-trend the time-series before modeling other aspects of it. Depending on your needs, you can use differencing, i.e. model $y'_t = y_t - y_{t-m}$, or subtract the trend, e.g. using linear regression $y_t = beta_0 + T_t + varepsilon_t$ ($beta_0$ is intercept and $T_t = beta_1 t$ is the parameter for time), next subtract the estimated trend $tilde{y}_t = y_t - (beta_0 + T_t) $ and then model the $tilde{y}_t$ using another algorithm. Notice that second approach is a direct consequence of assuming additive trend, e.g. $y_t = T_t + f(X_t) + varepsilon_t$. When subtracting the trend $T_t$, it does not have to be linear trend, it can be smooth function (e.g. LOESS), piecewise-linear, moving average, but also averaging linear trends in overlapping windows, or something else.



          Beyond simplicity, when modeling de-trended series you also easily visualize the series to gain better insights (what do you see when plotting de-trended series?) or notice problems with the assumed trend (e.g. "jumps" at the borders of windows, when using piece-wise trend components). The drawback of using two-stage approach, is that when estimating trend and other components using a single model, you would be also correcting for changes related to the other components, rather then ignoring them as with estimating the trend independently.



          I don't see how using the slope parameters as features would have any added effect over de-trending the series. If you did so, your network would still need to learn how to use this information, so why not use model that learns the trend by itself (by adding time as a feature)?




          † - like in in STL decomposition, $y_t = T_t + S_t + varepsilon_t$, where $T_t$ is trend and $S_t$ is seasonality and $varepsilon_t$ is random noise, or in Prophet, $y_t = T_t + S_t + H_t + varepsilon_t$, where $T_t$ is piecewice-liear trend, $S_t$ is seasonality and $H_t$ are holiday effects (Taylor and Letham, 2017).






          share|cite|improve this answer











          $endgroup$



          We often de-trend the time-series before modeling other aspects of it. Depending on your needs, you can use differencing, i.e. model $y'_t = y_t - y_{t-m}$, or subtract the trend, e.g. using linear regression $y_t = beta_0 + T_t + varepsilon_t$ ($beta_0$ is intercept and $T_t = beta_1 t$ is the parameter for time), next subtract the estimated trend $tilde{y}_t = y_t - (beta_0 + T_t) $ and then model the $tilde{y}_t$ using another algorithm. Notice that second approach is a direct consequence of assuming additive trend, e.g. $y_t = T_t + f(X_t) + varepsilon_t$. When subtracting the trend $T_t$, it does not have to be linear trend, it can be smooth function (e.g. LOESS), piecewise-linear, moving average, but also averaging linear trends in overlapping windows, or something else.



          Beyond simplicity, when modeling de-trended series you also easily visualize the series to gain better insights (what do you see when plotting de-trended series?) or notice problems with the assumed trend (e.g. "jumps" at the borders of windows, when using piece-wise trend components). The drawback of using two-stage approach, is that when estimating trend and other components using a single model, you would be also correcting for changes related to the other components, rather then ignoring them as with estimating the trend independently.



          I don't see how using the slope parameters as features would have any added effect over de-trending the series. If you did so, your network would still need to learn how to use this information, so why not use model that learns the trend by itself (by adding time as a feature)?




          † - like in in STL decomposition, $y_t = T_t + S_t + varepsilon_t$, where $T_t$ is trend and $S_t$ is seasonality and $varepsilon_t$ is random noise, or in Prophet, $y_t = T_t + S_t + H_t + varepsilon_t$, where $T_t$ is piecewice-liear trend, $S_t$ is seasonality and $H_t$ are holiday effects (Taylor and Letham, 2017).







          share|cite|improve this answer














          share|cite|improve this answer



          share|cite|improve this answer








          edited Feb 5 at 11:05

























          answered Feb 4 at 11:32









          TimTim

          58.4k9128221




          58.4k9128221












          • This doesn't answer the OP's question. It seems they are approximating a piecewise trend with very high-frequency data. But the OP needs to give more details. – generic_user, Feb 4 at 13:22

          • @generic_user it does; I didn't say that the trend needs to be linear here, I gave an example of a piecewise-linear trend. Nothing about high-frequency data was mentioned by the OP. – Tim, Feb 4 at 14:07

          • That's right Tim, I understood what you meant, but the problem is slightly different. I am updating my question with visuals to make things clearer. Thanks a lot to all of you. – JAbr, Feb 5 at 7:24

          • @JAbr see my edits. – Tim, Feb 5 at 11:06

          • @JAbr I don't understand what you mean. What I am suggesting is that you estimate the trend somehow and subtract it from the original series (or use differencing), then model the de-trended series. You can use moving windows or something else; it is your choice, depending on the characteristics of the data. – Tim, Feb 5 at 11:37
          The main thing to be careful of in predictive time series models is to avoid leaks from the future. The classic example is where you calculate the moving average or slope centered on the current point. This will include N points to the left and right, and the points to the right are from the future. You won't have those points when your model is actually going to be used.



          So you need to think carefully about exactly what information your model will have at the point you want to make a prediction, and not build your model on any other data. (That's why people use trailing statistics, though moving averages never really make sense, even if you use the less-wrong trailing version.)
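As a sketch of the leak (Python/NumPy; both helper names are hypothetical): a slope computed over a window centered on the current point reads future values, while a trailing window uses only the history available at prediction time.

```python
import numpy as np

def centered_slope(y, t, half):
    """Slope of a window centered on t -- LEAKS: reads y[t+1 : t+half+1]."""
    seg = y[t - half : t + half + 1]
    return np.polyfit(np.arange(len(seg)), seg, 1)[0]

def trailing_slope(y, t, window):
    """Slope of the `window` points ending at t -- past data only."""
    seg = y[t - window + 1 : t + 1]
    return np.polyfit(np.arange(window), seg, 1)[0]

y = np.array([0.0, 0.5, 0.4, 1.0, 1.3, 1.2, 1.8])
print(round(trailing_slope(y, t=6, window=4), 2))  # → 0.23, uses y[3:7] only
print(round(centered_slope(y, t=3, half=2), 2))    # fine offline, not at prediction time
```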



          In answer to your comment, I am uncomfortable with your proposed method for several reasons, but first let me say that I think your instinct on this is probably good: you don't want to come up with an ad hoc method for formalizing what you see in time series as you look at them. None of this "Well, if I subtract the 3rd, 5th, and 7th values from the average and take the largest...", and that's a good instinct.



          But your method introduces other complications that worry me:




          1. You’ll probably end up using a regression method which is thrown off by outliers. In that case a single spike point can cause it to calculate a misleading slope.


          2. Do you understand regression and its diagnostics? Are you going to grab the slope and intercept coefficients without looking at their "statistical significance"? If not, what will you do when they are not significant?



          3. If you do understand regression diagnostics, do you know how they can be inaccurate (overly optimistic) when applied to time series, and how to correct for that?



            Calculating trends by regression is trotting out machinery that is often more complex than people realize, and yet not as powerful as, for example, State Space approaches that could give you direct answers.



            That’s why I’m encouraging you to think of other features that might be simpler and more directly address your actual targets.




          4. Are you sure that there really are up and down segments? From what you say there's no overall trend or any seasonality, so the series is basically wandering around. Depending on the autocorrelation, this can make series that look like noise or that look like something's going on. But it really isn't. Using a complex tool like regression could fool you into thinking what you're seeing must be real, when it might just be noise.



            If you're using R, try plot(arima.sim(n = 50, list(ar = c(0.882)))) several times in a row. You'll see fascinating patterns, but...
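A rough Python analogue of that R experiment (my own sketch, not the original code): simulate a trend-free AR(1) process and fit a slope to each run; any "trend" you find is pure autocorrelated noise.

```python
import numpy as np

def ar1(n, phi, rng):
    """Simulate a mean-zero AR(1) process: x_t = phi * x_{t-1} + e_t."""
    x = np.zeros(n)
    e = rng.normal(size=n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

# No trend is present, yet each run tends to show a convincing slope.
for seed in range(4):
    x = ar1(50, 0.882, np.random.default_rng(seed))
    slope = np.polyfit(np.arange(50), x, 1)[0]
    print(f"run {seed}: fitted slope = {slope:+.3f}")
```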




          On the other hand, if you are rigorous with training and testing and with avoiding future leaks, you can certainly try the regression-slope method and not care much whether it's an approved idea or not: the proof is in the predicting. If it predicts well, and you're not messing something up to fool yourself about your out-of-sample accuracy, it's good enough. You might not be able to reverse-engineer your model for good explanations of why, but if that's not your concern, there's no need to worry. (And it might be slow to deploy on a large scale, but that's another matter.)



          Pre-edit continuation of answer:



          Also, we love to calculate trends by throwing a set of data into an OLS regression. But is that actually necessary? Is the trend in this case just the first value in the window subtracted from the last? Or do you care instead about the number of positive point-to-point deltas versus negative deltas?
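These alternatives are cheap enough to compute side by side (a sketch; the feature names are my own), so validation performance can decide among them:

```python
import numpy as np

def window_trend_features(y):
    """Several simple summaries of a window's direction, computed together."""
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)   # OLS line fit
    deltas = np.diff(y)                      # point-to-point changes
    return {
        "ols_slope": slope,
        "ols_intercept": intercept,
        "last_minus_first": y[-1] - y[0],    # crude overall trend
        "frac_up_moves": float(np.mean(deltas > 0)),
    }

feats = window_trend_features(np.array([2.0, 2.1, 1.9, 2.6, 3.0]))
print(round(feats["last_minus_first"], 2))  # → 1.0
print(feats["frac_up_moves"])               # → 0.75
```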



          Last, you can't just use an OLS regression naively on a time series. Time series are autocorrelated, which violates an assumption of OLS. (The main effect is that your diagnostics are too optimistic and you think your slope is statistically different from 0 and it's not.) Not to mention that time series have seasonal effects. That's why there's old-school ARIMA. (A more modern approach is State Space, also known as Kalman, but that's probably too far afield for your simple application, hence my question about whether you even really need to do regressions at all.)






          • Thanks Wayne, wonderful and well-thought-out answer. To your questions: the time series I am working with is sensor data, which generally has no trend or seasonality; I already checked the ACF and stationarity. But when I take smaller windows of this time series, it actually has some trends going up and down, and in my deep-learning regression model I want to put this information in as features. Is it a good approach to just fit a line on a window and take the slope and intercept coefficients? – JAbr, Feb 5 at 7:18

          • You also mentioned taking the first and last data points in a window; that should do the thing, but it does not capture all the data patterns. E.g. the first and last data points can be at the same value, but if you fit a line to this window you will find some slope going up or down. – JAbr, Feb 5 at 7:18

          • @JAbr Edited my answer. – Wayne, Feb 6 at 0:30






          edited Feb 5 at 23:52

          answered Feb 4 at 15:14 by Wayne

          An alternative, perhaps more principled, way of estimating "local" trends is to use structural models "à la Harvey". These are models which can be cast in state-space form and allow explicit consideration of trend and seasonality, among other things. They can be supplemented with fixed effects and made more elaborate as needed.

          Software is widely available: in R, the function StructTS fits several different structural models, and there are packages for multivariate series as well.

          Aside from the books by Harvey, Durbin and Koopman (2011), Time Series Analysis by State Space Methods, Oxford University Press, is a very nice introduction.
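For readers not using R, the local linear trend model underlying such structural fits can be sketched with a tiny hand-rolled Kalman filter (a toy illustration only; the noise variances here are arbitrary tuning values, whereas StructTS estimates them by maximum likelihood):

```python
import numpy as np

def kalman_local_linear_trend(y, q_level=0.01, q_slope=0.001, r=0.5):
    """Minimal Kalman filter for a local linear trend model:
         level_t = level_{t-1} + slope_{t-1} + eta_t
         slope_t = slope_{t-1} + zeta_t
         y_t     = level_t + eps_t
    Returns filtered level and slope estimates at each step."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])               # observation matrix
    Q = np.diag([q_level, q_slope])          # process noise covariance
    x = np.array([y[0], 0.0])                # initial state [level, slope]
    P = np.eye(2)                            # initial state covariance
    levels, slopes = [], []
    for obs in y:
        # predict step
        x = F @ x
        P = F @ P @ F.T + Q
        # update step
        S = H @ P @ H.T + r                  # innovation variance
        K = (P @ H.T) / S                    # Kalman gain
        x = x + (K * (obs - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        levels.append(x[0])
        slopes.append(x[1])
    return np.array(levels), np.array(slopes)

t = np.arange(40)
y = 0.3 * t + np.random.default_rng(1).normal(scale=0.5, size=40)
levels, slopes = kalman_local_linear_trend(y)
print(round(slopes[-1], 2))  # filtered slope; should settle near the true 0.3
```

The filtered slope at each step is exactly the kind of "local trend" feature discussed in the question, estimated without cutting the series into windows.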






          • Thanks a lot Tusell, I will definitely look into it; it seems quite interesting. – JAbr, Feb 5 at 7:19
          share|cite|improve this answer












          share|cite|improve this answer



          share|cite|improve this answer










          answered Feb 4 at 14:55









          F. TusellF. Tusell

          5,4341424




          5,4341424












          • $begingroup$
            Thanks a lot Tusell, I will definitely look into it; it seems quite interesting.
            $endgroup$
            – JAbr
            Feb 5 at 7:19
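If R and StructTS are not at hand, the local-linear-trend idea behind such structural models can be imitated in a few lines with Holt's linear method (double exponential smoothing), which maintains a local level and a local slope recursively; the slope series is directly usable as a per-window trend feature. This is a minimal pure-Python sketch, not StructTS itself, and the smoothing constants `alpha` and `beta` are illustrative choices, not fitted values.

```python
def holt_linear(y, alpha=0.5, beta=0.3):
    """Holt's linear (double exponential) smoothing.

    Returns two lists the same length as `y`: the smoothed local
    level and the smoothed local slope (trend) at each time step.
    """
    level = y[0]
    trend = y[1] - y[0]          # initialise slope from the first two points
    levels, trends = [level], [trend]
    for obs in y[1:]:
        prev_level = level
        # Update level towards the new observation, slope towards the
        # most recent change in level.
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        levels.append(level)
        trends.append(trend)
    return levels, trends

# A roughly linear series with slope near 1: the recovered local
# slope should settle close to 1.
series = [1.0, 2.1, 2.9, 4.2, 5.0, 6.1]
levels, trends = holt_linear(series)
```

The final entries of `trends` give the current local slope, which can be read off per window just like the OLS slope the question proposes.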

































          1












          $begingroup$

          I cannot comment, hence posting as an answer.

          If you are not using deep learning, is it mandatory for you to use an overlapping sliding window? This technique is generally used in deep learning sequence models, where you convert your time series data into n small sequences, making them n training examples.

          For a standard time series solution using ARIMA, you can use Tim's method.






          $endgroup$













          answered Feb 4 at 17:44

          solver149
          168












          • $begingroup$
            Yes, I am doing the same thing. It is not strictly necessary that I do it this way, but during my research I should try everything out.
            $endgroup$
            – JAbr
            Feb 5 at 7:45
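To make the question's setup concrete: extracting a slope and intercept per overlapping window is a short computation. The sketch below is a minimal pure-Python illustration (the helper name `window_trend_features` is hypothetical, not from any library): it slides a window of size `window` with stride `step` over the series and fits an ordinary least-squares line to each window, yielding the (slope, intercept) pair the asker proposes as features.

```python
def window_trend_features(series, window, step):
    """Slide an overlapping window over `series`; for each window,
    return (slope, intercept) of the OLS line fit to it, with time
    indexed 0..window-1 inside each window."""
    feats = []
    for start in range(0, len(series) - window + 1, step):
        y = series[start:start + window]
        t_mean = (window - 1) / 2
        y_mean = sum(y) / window
        # Closed-form simple linear regression: slope = cov(t, y) / var(t)
        num = sum((ti - t_mean) * (yi - y_mean) for ti, yi in enumerate(y))
        den = sum((ti - t_mean) ** 2 for ti in range(window))
        slope = num / den
        feats.append((slope, y_mean - slope * t_mean))
    return feats

# An exactly linear series: every window recovers slope 2; intercepts
# differ because time restarts at 0 inside each window.
feats = window_trend_features([0, 2, 4, 6, 8, 10], window=4, step=2)
# → [(2.0, 0.0), (2.0, 4.0)]
```

Because the time index restarts inside each window, the intercept mostly encodes the window's starting level; if only the direction of the trend matters, the slope alone (or the window-demeaned slope) may be the more useful feature for the DL model.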

















