Optimization Policies Template
Optimization templates apply Optimization policies to appliances.
Priority
With this template, you can create rules with priority from 1000 – 9999, inclusive. When you apply the template to an appliance, the Orchestrator deletes all appliance entries in that range before applying its policies.
If you access an appliance directly (via the WebUI or the command line interface), you can create rules with higher priority than Orchestrator rules (1 – 999) and rules with lower priority (10000 – 65534).
Adding a rule increments the last Priority by 10. This leaves room for you to insert a rule between existing rules without having to renumber subsequent priorities. You can also edit the priority number directly.
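For illustration only, the following Python sketch (the helper name and list are assumptions, not part of Orchestrator) shows how appending rules in steps of 10 within the 1000 – 9999 range leaves room to slot a rule in between later without renumbering:

    # Hypothetical illustration of the template priority numbering; not Orchestrator code.
    ORCH_MIN, ORCH_MAX = 1000, 9999    # priority range managed by the template
    STEP = 10                          # each new rule increments the last priority by 10

    def next_priority(existing):
        """Priority for a rule appended to the end of the template."""
        if not existing:
            return ORCH_MIN
        candidate = max(existing) + STEP
        if candidate > ORCH_MAX:
            raise ValueError("no room left in the Orchestrator priority range")
        return candidate

    rules = [1000, 1010, 1020]
    print(next_priority(rules))        # 1030
    # A rule inserted between 1010 and 1020 can simply be given priority 1015.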
Source or Destination
To allow any IP address, use 0.0.0.0/0.
To allow any port, use 0.
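The wildcard semantics can be illustrated with a short Python sketch using the standard ipaddress module (illustrative only; this is not the appliance's matching logic, and the parameter names are assumptions):

    # Illustrative match-any semantics for source/destination; not appliance code.
    import ipaddress

    def matches(rule_subnet, rule_port, pkt_ip, pkt_port):
        in_subnet = ipaddress.ip_address(pkt_ip) in ipaddress.ip_network(rule_subnet)
        port_ok = rule_port == 0 or rule_port == pkt_port   # port 0 means any port
        return in_subnet and port_ok                        # 0.0.0.0/0 contains every IPv4 address

    print(matches("0.0.0.0/0", 0, "10.1.2.3", 443))      # True: any IP, any port
    print(matches("10.1.0.0/16", 80, "10.1.2.3", 443))   # False: port does not match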
Set Actions Definitions
Network Memory addresses limited bandwidth. This technology uses advanced fingerprinting algorithms to examine all incoming and outgoing WAN traffic. Network Memory localizes information and transmits only modifications between locations.
Maximize Reduction optimizes for maximum data reduction at the potential cost of slightly lower throughput and/or some increase in latency. It is appropriate for bulk data transfers such as file transfers and FTP, where bandwidth savings are the primary concern.
Minimize Latency ensures that Network Memory processing adds no latency. This may come at the cost of lower data reduction. It is appropriate for extremely latency-sensitive interactive or transactional traffic. It's also appropriate when the primary objective is to fully utilize the WAN pipe to increase the LAN-side throughput, as opposed to conserving WAN bandwidth.
Balanced is the default setting. It dynamically balances latency and data reduction objectives and is the best choice for most traffic types.
Disabled turns off Network Memory.
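The general idea behind fingerprint-based deduplication can be sketched as follows; this is a deliberately simplified model, and the chunk size, fingerprint, and store are assumptions rather than Silver Peak's proprietary algorithms:

    # Simplified sketch of fingerprint-based deduplication in the spirit of Network Memory;
    # the real chunking, fingerprints, and stores are proprietary and far more sophisticated.
    import hashlib

    CHUNK = 64                # illustrative fixed chunk size
    store = {}                # fingerprint -> chunk (conceptually shared by both appliances)

    def encode(data):
        """Send new chunks once; send only short fingerprints for chunks already seen."""
        out = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            fp = hashlib.sha1(chunk).digest()[:8]     # short fingerprint of the chunk
            if fp in store:
                out.append(("ref", fp))               # only the reference crosses the WAN
            else:
                store[fp] = chunk
                out.append(("raw", chunk))            # new data is sent once and remembered
        return out

    encode(b"A" * 256)                        # first transfer: all chunks sent raw
    second = encode(b"A" * 256)               # identical data: only references are "sent"
    print(sum(len(x) for _, x in second))     # 32 bytes of references instead of 256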
IP Header Compression is the process of compressing excess protocol headers before transmitting them on a link and uncompressing them to their original state at the other end. It's possible to compress the protocol headers due to the redundancy in header fields of the same packet, as well as in consecutive packets of a packet stream.
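As a toy example of why this works, consecutive packets of a flow typically differ in only a few header fields, so only the changed fields need to cross the link (the dictionaries below are illustrative; real schemes such as Van Jacobson compression or ROHC operate on the actual bit fields):

    # Toy delta-encoding of header fields to show where the redundancy lies;
    # real header compression schemes are considerably more elaborate.
    def header_delta(prev, curr):
        """Send only the header fields that changed since the previous packet."""
        return {k: v for k, v in curr.items() if prev.get(k) != v}

    pkt1 = {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 51515, "dport": 443, "seq": 1000}
    pkt2 = {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 51515, "dport": 443, "seq": 2460}

    print(header_delta(pkt1, pkt2))   # {'seq': 2460}: one field instead of the full header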
Payload Compression uses algorithms to identify relatively short byte sequences that are repeated frequently. These are then replaced with shorter segments of code to reduce the size of transmitted data. Simple algorithms can find repeated bytes within a single packet; more sophisticated algorithms can find duplication across packets and even across flows.
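For intuition, standard dictionary compressors behave exactly this way within a buffer; the zlib example below is illustrative and is not the appliance's compression engine:

    # Illustration of payload compression on repetitive data using zlib; the appliance's
    # own algorithms (including cross-packet and cross-flow matching) differ.
    import zlib

    payload = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 50
    compressed = zlib.compress(payload)

    print(len(payload), "->", len(compressed), "bytes")   # repeated byte sequences collapse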
TCP Acceleration uses techniques such as selective acknowledgements, window scaling, and maximum segment size adjustment to mitigate poor performance on high-latency links.
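The need for window scaling, in particular, follows from the bandwidth-delay product: without it, a single TCP flow cannot keep a high-latency link full. A back-of-the-envelope calculation with illustrative numbers:

    # Why window scaling matters on high-latency links (illustrative numbers).
    link_mbps = 100                 # WAN link capacity
    rtt_s = 0.100                   # 100 ms round-trip time

    bdp_bytes = (link_mbps * 1_000_000 / 8) * rtt_s    # bandwidth-delay product
    max_unscaled = 65_535                              # largest TCP window without scaling

    print(f"Data in flight needed to fill the link: {bdp_bytes / 1024:.0f} KB")          # ~1221 KB
    print(f"Throughput cap without scaling: {max_unscaled / rtt_s * 8 / 1e6:.1f} Mbps")  # ~5.2 Mbps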
Protocol Acceleration provides explicit configuration for optimizing CIFS, SSL, SRDF, Citrix, and iSCSI protocols. In a network environment, it's possible that not every appliance has the same optimization configurations enabled. Therefore, the site that initiates the flow (the client) determines the state of the protocol-specific optimization.
TCP Acceleration Options
TCP acceleration uses techniques such as selective acknowledgements, window scaling, and maximum segment size adjustment to compensate for poor performance on high-latency links.
This feature has a set of advanced options with default values.
CAUTION: Because changing these settings can affect service, Silver Peak recommends that you do not modify them without direction from Customer Support.
 
Limits the TCP MSS (Maximum Segment Size) advertised by the end hosts in the SYN segment to a value derived from the Tunnel MTU (Maximum Transmission Unit); that is, TCP MSS = Tunnel MTU – Tunnel Packet Overhead.
This feature is enabled by default so that the maximum value of the end host MSS is always coupled to the Tunnel MSS. If the end host MSS is smaller than the tunnel MSS, then the end host MSS is used instead.
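A worked example with illustrative numbers (the actual tunnel packet overhead depends on the encapsulation in use and is not specified here):

    # Worked example of the advertised-MSS calculation; the overhead value is an assumption.
    tunnel_mtu = 1500              # tunnel MTU, bytes
    tunnel_overhead = 60           # assumed tunnel packet overhead, bytes
    host_mss = 1460                # MSS advertised by the end host in its SYN

    derived_mss = tunnel_mtu - tunnel_overhead    # TCP MSS = Tunnel MTU - Tunnel Packet Overhead
    advertised_mss = min(host_mss, derived_mss)   # the smaller of the two values is used

    print(advertised_mss)          # 1440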
Preserves the packet boundaries end to end. If this feature is disabled, then the appliances in the path can coalesce consecutive packets of a flow to use bandwidth more efficiently.
Enable Silver Peak TCP SYN option exchange
Controls whether or not Silver Peak forwards its proprietary TCP SYN option on the LAN side. Enabled by default, this feature detects if there are more than two Silver Peak appliances in the flow's data path, and optimizes accordingly.
Tries to override asymmetric route policy settings. It emulates auto-opt behavior by using the same tunnel for the returning SYN+ACK as it did for the original SYN packet.
Disable this feature if the asymmetric route policy setting is necessary to correctly route packets. In that case, you may need to configure flow redirection to ensure optimization of TCP flows.
NOTE: Whether this feature is enabled or not, the default behavior when a tunnel goes Down is to automatically reset the flows.
Resets all unaccelerated TCP flows that are associated with a normally operating tunnel, where:
- TCP acceleration is enabled
- The SYN packet was not seen (so the flow was either part of WCCP redirection, or it already existed when the appliance was inserted in the data path).
If selected and if the appliance doesn’t receive a TCP SYN-ACK from the remote end within 5 seconds, the flow proceeds without acceleration and the destination IP address is blacklisted for one minute.
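That behavior can be modeled roughly as follows; the timeouts come from the text above, but the data structure and function name are assumptions, not appliance code:

    # Rough model of the SYN-ACK timeout and one-minute blacklist described above.
    import time

    SYNACK_TIMEOUT = 5.0      # seconds to wait for the remote SYN-ACK
    BLACKLIST_TTL = 60.0      # destination stays blacklisted for one minute

    blacklist = {}            # destination IP -> blacklist expiry timestamp

    def should_accelerate(dst_ip, synack_received_in_time):
        now = time.time()
        if blacklist.get(dst_ip, 0.0) > now:
            return False                              # recently blacklisted: pass through unaccelerated
        if not synack_received_in_time:
            blacklist[dst_ip] = now + BLACKLIST_TTL   # no SYN-ACK within 5 s: blacklist for 60 s
            return False
        return True

    print(should_accelerate("192.0.2.10", synack_received_in_time=False))  # False, now blacklisted
    print(should_accelerate("192.0.2.10", synack_received_in_time=True))   # still False for 60 s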
This feature helps fine-tune TCP behavior during a connection's graceful shutdown. When this feature is ON (the default), TCP on the local appliance synchronizes the graceful shutdown of the local LAN side with the remote Silver Peak's LAN side. When this feature is OFF (standard TCP behavior), no such synchronization happens and the two LAN segments shut down gracefully and independently of each other.
This is the WAN-side TCP Window scale factor that Silver Peak uses internally for its WAN-side traffic. This is independent of the WAN-side factor advertised by the end hosts.
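For reference, the TCP window scale option (RFC 7323) left-shifts the 16-bit advertised window by the scale factor, so the effective window grows quickly with the factor:

    # Effect of a TCP window scale factor (RFC 7323): effective window = 16-bit window << WSF.
    advertised_window = 65_535          # maximum value of the 16-bit window field
    for wsf in (0, 4, 8):
        print(f"WSF {wsf}: effective window {advertised_window << wsf:,} bytes")
    # WSF 0:     65,535 bytes
    # WSF 4:  1,048,560 bytes
    # WSF 8: 16,776,960 bytes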
Resets all flows that consume a disproportionate amount of buffer and have a very slow throughput on the LAN side. Owing to a few slower end hosts or a lossy LAN, these flows affect the performance of all other flows such that no flows see the customary throughput improvement gained through TCP acceleration.
Optimized - This is the default setting. This mode offers optimized performance in almost all scenarios.
Standard - In some unique cases it may be necessary to downgrade to Standard performance to better interoperate with other flows on the WAN link.
Aggressive - Provides aggressive performance and should be used with caution. Recommended mostly for Data Replication scenarios.
(Max LAN to WAN Buffer and Max WAN to LAN Buffer)
This setting (OFF by default) penalizes flows that are slow to send data on the LAN side by artificially reducing their TCP receive window. This causes less data to be received and helps to reach a balance with the data sending rate on the LAN side.
This setting allows the appliance to present an artificially lowered WSF to the end host. This reduces the need for memory in scenarios where there are a lot of out-of-order packets being received from the LAN side. These out-of-order packets cause a lot of buffer utilization and maintenance.
Probe Interval - Time interval in seconds between two consecutive Keep Alive Probes
Probe Count - Maximum number of Keep Alive probes to send
First Timeout (Idle) - Time interval until the first Keep Alive timeout
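These three parameters correspond to the standard TCP keepalive knobs. On a Linux host they map to socket options as in the sketch below; the values are illustrative, and the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT names are Linux-specific, shown only to clarify what each parameter controls:

    # How the three keepalive parameters map to standard Linux socket options (illustrative values).
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)      # enable keepalive probes
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 600)   # First Timeout (Idle), seconds
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 75)   # Probe Interval, seconds
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 9)      # Probe Count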

Please send comments or suggestions regarding user documentation to techpubs@silver-peak.com.